Test Report: KVM_Linux_crio 20354

f4981b37cef8a8edf9576fbca56a900d4b787caa:2025-02-03:38193

Test fail (11/321)

TestAddons/parallel/Ingress (153.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-106432 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-106432 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-106432 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [353ea63f-c6cb-41d2-a99b-ede66853eb91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [353ea63f-c6cb-41d2-a99b-ede66853eb91] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003532239s
I0203 10:36:32.393833  116606 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-106432 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.309404585s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-106432 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-106432 -n addons-106432
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 logs -n 25: (1.249612167s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-730636                                                                     | download-only-730636 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:33 UTC |
	| delete  | -p download-only-677633                                                                     | download-only-677633 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:33 UTC |
	| delete  | -p download-only-730636                                                                     | download-only-730636 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-745865 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC |                     |
	|         | binary-mirror-745865                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43989                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-745865                                                                     | binary-mirror-745865 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC |                     |
	|         | addons-106432                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC |                     |
	|         | addons-106432                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-106432 --wait=true                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-106432 addons disable                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:35 UTC | 03 Feb 25 10:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-106432 addons disable                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:35 UTC | 03 Feb 25 10:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | -p addons-106432                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106432 addons                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106432 addons disable                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-106432 addons                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-106432 ip                                                                            | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	| addons  | addons-106432 addons disable                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-106432 addons disable                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-106432 ssh curl -s                                                                   | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-106432 ssh cat                                                                       | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | /opt/local-path-provisioner/pvc-3aa481ba-a49b-47b8-bb6c-20fb974304cd_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-106432 addons disable                                                                | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106432 addons                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106432 addons                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106432 addons                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106432 addons                                                                        | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:36 UTC | 03 Feb 25 10:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-106432 ip                                                                            | addons-106432        | jenkins | v1.35.0 | 03 Feb 25 10:38 UTC | 03 Feb 25 10:38 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 10:33:24
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 10:33:24.573484  117311 out.go:345] Setting OutFile to fd 1 ...
	I0203 10:33:24.574026  117311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:33:24.574049  117311 out.go:358] Setting ErrFile to fd 2...
	I0203 10:33:24.574058  117311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:33:24.574494  117311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 10:33:24.575346  117311 out.go:352] Setting JSON to false
	I0203 10:33:24.576294  117311 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4547,"bootTime":1738574258,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 10:33:24.576420  117311 start.go:139] virtualization: kvm guest
	I0203 10:33:24.578061  117311 out.go:177] * [addons-106432] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 10:33:24.579412  117311 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 10:33:24.579415  117311 notify.go:220] Checking for updates...
	I0203 10:33:24.581351  117311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:33:24.582240  117311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 10:33:24.583192  117311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:33:24.584132  117311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 10:33:24.585047  117311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 10:33:24.586139  117311 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:33:24.615865  117311 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 10:33:24.616838  117311 start.go:297] selected driver: kvm2
	I0203 10:33:24.616848  117311 start.go:901] validating driver "kvm2" against <nil>
	I0203 10:33:24.616868  117311 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 10:33:24.617526  117311 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:33:24.617623  117311 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 10:33:24.631819  117311 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 10:33:24.631869  117311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 10:33:24.632134  117311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 10:33:24.632178  117311 cni.go:84] Creating CNI manager for ""
	I0203 10:33:24.632227  117311 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 10:33:24.632239  117311 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 10:33:24.632290  117311 start.go:340] cluster config:
	{Name:addons-106432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-106432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPause
Interval:1m0s}
	I0203 10:33:24.632397  117311 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:33:24.634031  117311 out.go:177] * Starting "addons-106432" primary control-plane node in "addons-106432" cluster
	I0203 10:33:24.635027  117311 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 10:33:24.635060  117311 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 10:33:24.635070  117311 cache.go:56] Caching tarball of preloaded images
	I0203 10:33:24.635147  117311 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 10:33:24.635158  117311 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 10:33:24.635487  117311 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/config.json ...
	I0203 10:33:24.635508  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/config.json: {Name:mk5234bd6b05226493f6e34e0fd7904f196d3216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:24.635646  117311 start.go:360] acquireMachinesLock for addons-106432: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 10:33:24.635690  117311 start.go:364] duration metric: took 31.018µs to acquireMachinesLock for "addons-106432"
	I0203 10:33:24.635707  117311 start.go:93] Provisioning new machine with config: &{Name:addons-106432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-106432 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 10:33:24.635768  117311 start.go:125] createHost starting for "" (driver="kvm2")
	I0203 10:33:24.637179  117311 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0203 10:33:24.637339  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:33:24.637381  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:33:24.652223  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45187
	I0203 10:33:24.652655  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:33:24.653268  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:33:24.653317  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:33:24.653672  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:33:24.653883  117311 main.go:141] libmachine: (addons-106432) Calling .GetMachineName
	I0203 10:33:24.654032  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:24.654201  117311 start.go:159] libmachine.API.Create for "addons-106432" (driver="kvm2")
	I0203 10:33:24.654227  117311 client.go:168] LocalClient.Create starting
	I0203 10:33:24.654270  117311 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem
	I0203 10:33:24.789786  117311 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem
	I0203 10:33:25.090238  117311 main.go:141] libmachine: Running pre-create checks...
	I0203 10:33:25.090271  117311 main.go:141] libmachine: (addons-106432) Calling .PreCreateCheck
	I0203 10:33:25.090817  117311 main.go:141] libmachine: (addons-106432) Calling .GetConfigRaw
	I0203 10:33:25.091334  117311 main.go:141] libmachine: Creating machine...
	I0203 10:33:25.091351  117311 main.go:141] libmachine: (addons-106432) Calling .Create
	I0203 10:33:25.091506  117311 main.go:141] libmachine: (addons-106432) creating KVM machine...
	I0203 10:33:25.091521  117311 main.go:141] libmachine: (addons-106432) creating network...
	I0203 10:33:25.092686  117311 main.go:141] libmachine: (addons-106432) DBG | found existing default KVM network
	I0203 10:33:25.093562  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:25.093401  117333 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015cc0}
	I0203 10:33:25.093583  117311 main.go:141] libmachine: (addons-106432) DBG | created network xml: 
	I0203 10:33:25.093596  117311 main.go:141] libmachine: (addons-106432) DBG | <network>
	I0203 10:33:25.093612  117311 main.go:141] libmachine: (addons-106432) DBG |   <name>mk-addons-106432</name>
	I0203 10:33:25.093625  117311 main.go:141] libmachine: (addons-106432) DBG |   <dns enable='no'/>
	I0203 10:33:25.093633  117311 main.go:141] libmachine: (addons-106432) DBG |   
	I0203 10:33:25.093640  117311 main.go:141] libmachine: (addons-106432) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0203 10:33:25.093648  117311 main.go:141] libmachine: (addons-106432) DBG |     <dhcp>
	I0203 10:33:25.093653  117311 main.go:141] libmachine: (addons-106432) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0203 10:33:25.093657  117311 main.go:141] libmachine: (addons-106432) DBG |     </dhcp>
	I0203 10:33:25.093662  117311 main.go:141] libmachine: (addons-106432) DBG |   </ip>
	I0203 10:33:25.093668  117311 main.go:141] libmachine: (addons-106432) DBG |   
	I0203 10:33:25.093675  117311 main.go:141] libmachine: (addons-106432) DBG | </network>
	I0203 10:33:25.093686  117311 main.go:141] libmachine: (addons-106432) DBG | 
	I0203 10:33:25.099718  117311 main.go:141] libmachine: (addons-106432) DBG | trying to create private KVM network mk-addons-106432 192.168.39.0/24...
	I0203 10:33:25.167900  117311 main.go:141] libmachine: (addons-106432) setting up store path in /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432 ...
	I0203 10:33:25.167937  117311 main.go:141] libmachine: (addons-106432) DBG | private KVM network mk-addons-106432 192.168.39.0/24 created
	I0203 10:33:25.167950  117311 main.go:141] libmachine: (addons-106432) building disk image from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0203 10:33:25.167970  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:25.167819  117333 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:33:25.167983  117311 main.go:141] libmachine: (addons-106432) Downloading /home/jenkins/minikube-integration/20354-109432/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 10:33:25.453969  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:25.453755  117333 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa...
	I0203 10:33:25.620950  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:25.620832  117333 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/addons-106432.rawdisk...
	I0203 10:33:25.620980  117311 main.go:141] libmachine: (addons-106432) DBG | Writing magic tar header
	I0203 10:33:25.620990  117311 main.go:141] libmachine: (addons-106432) DBG | Writing SSH key tar header
	I0203 10:33:25.620997  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:25.620962  117333 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432 ...
	I0203 10:33:25.621078  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432
	I0203 10:33:25.621103  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines
	I0203 10:33:25.621111  117311 main.go:141] libmachine: (addons-106432) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432 (perms=drwx------)
	I0203 10:33:25.621123  117311 main.go:141] libmachine: (addons-106432) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines (perms=drwxr-xr-x)
	I0203 10:33:25.621129  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:33:25.621139  117311 main.go:141] libmachine: (addons-106432) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube (perms=drwxr-xr-x)
	I0203 10:33:25.621155  117311 main.go:141] libmachine: (addons-106432) setting executable bit set on /home/jenkins/minikube-integration/20354-109432 (perms=drwxrwxr-x)
	I0203 10:33:25.621169  117311 main.go:141] libmachine: (addons-106432) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0203 10:33:25.621180  117311 main.go:141] libmachine: (addons-106432) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0203 10:33:25.621185  117311 main.go:141] libmachine: (addons-106432) creating domain...
	I0203 10:33:25.621195  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432
	I0203 10:33:25.621200  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0203 10:33:25.621212  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home/jenkins
	I0203 10:33:25.621219  117311 main.go:141] libmachine: (addons-106432) DBG | checking permissions on dir: /home
	I0203 10:33:25.621225  117311 main.go:141] libmachine: (addons-106432) DBG | skipping /home - not owner
	I0203 10:33:25.622363  117311 main.go:141] libmachine: (addons-106432) define libvirt domain using xml: 
	I0203 10:33:25.622382  117311 main.go:141] libmachine: (addons-106432) <domain type='kvm'>
	I0203 10:33:25.622393  117311 main.go:141] libmachine: (addons-106432)   <name>addons-106432</name>
	I0203 10:33:25.622399  117311 main.go:141] libmachine: (addons-106432)   <memory unit='MiB'>4000</memory>
	I0203 10:33:25.622408  117311 main.go:141] libmachine: (addons-106432)   <vcpu>2</vcpu>
	I0203 10:33:25.622415  117311 main.go:141] libmachine: (addons-106432)   <features>
	I0203 10:33:25.622434  117311 main.go:141] libmachine: (addons-106432)     <acpi/>
	I0203 10:33:25.622443  117311 main.go:141] libmachine: (addons-106432)     <apic/>
	I0203 10:33:25.622452  117311 main.go:141] libmachine: (addons-106432)     <pae/>
	I0203 10:33:25.622466  117311 main.go:141] libmachine: (addons-106432)     
	I0203 10:33:25.622495  117311 main.go:141] libmachine: (addons-106432)   </features>
	I0203 10:33:25.622517  117311 main.go:141] libmachine: (addons-106432)   <cpu mode='host-passthrough'>
	I0203 10:33:25.622531  117311 main.go:141] libmachine: (addons-106432)   
	I0203 10:33:25.622546  117311 main.go:141] libmachine: (addons-106432)   </cpu>
	I0203 10:33:25.622555  117311 main.go:141] libmachine: (addons-106432)   <os>
	I0203 10:33:25.622566  117311 main.go:141] libmachine: (addons-106432)     <type>hvm</type>
	I0203 10:33:25.622576  117311 main.go:141] libmachine: (addons-106432)     <boot dev='cdrom'/>
	I0203 10:33:25.622587  117311 main.go:141] libmachine: (addons-106432)     <boot dev='hd'/>
	I0203 10:33:25.622596  117311 main.go:141] libmachine: (addons-106432)     <bootmenu enable='no'/>
	I0203 10:33:25.622606  117311 main.go:141] libmachine: (addons-106432)   </os>
	I0203 10:33:25.622616  117311 main.go:141] libmachine: (addons-106432)   <devices>
	I0203 10:33:25.622632  117311 main.go:141] libmachine: (addons-106432)     <disk type='file' device='cdrom'>
	I0203 10:33:25.622655  117311 main.go:141] libmachine: (addons-106432)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/boot2docker.iso'/>
	I0203 10:33:25.622666  117311 main.go:141] libmachine: (addons-106432)       <target dev='hdc' bus='scsi'/>
	I0203 10:33:25.622676  117311 main.go:141] libmachine: (addons-106432)       <readonly/>
	I0203 10:33:25.622686  117311 main.go:141] libmachine: (addons-106432)     </disk>
	I0203 10:33:25.622696  117311 main.go:141] libmachine: (addons-106432)     <disk type='file' device='disk'>
	I0203 10:33:25.622710  117311 main.go:141] libmachine: (addons-106432)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0203 10:33:25.622765  117311 main.go:141] libmachine: (addons-106432)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/addons-106432.rawdisk'/>
	I0203 10:33:25.622796  117311 main.go:141] libmachine: (addons-106432)       <target dev='hda' bus='virtio'/>
	I0203 10:33:25.622804  117311 main.go:141] libmachine: (addons-106432)     </disk>
	I0203 10:33:25.622820  117311 main.go:141] libmachine: (addons-106432)     <interface type='network'>
	I0203 10:33:25.622834  117311 main.go:141] libmachine: (addons-106432)       <source network='mk-addons-106432'/>
	I0203 10:33:25.622845  117311 main.go:141] libmachine: (addons-106432)       <model type='virtio'/>
	I0203 10:33:25.622856  117311 main.go:141] libmachine: (addons-106432)     </interface>
	I0203 10:33:25.622866  117311 main.go:141] libmachine: (addons-106432)     <interface type='network'>
	I0203 10:33:25.622878  117311 main.go:141] libmachine: (addons-106432)       <source network='default'/>
	I0203 10:33:25.622886  117311 main.go:141] libmachine: (addons-106432)       <model type='virtio'/>
	I0203 10:33:25.622922  117311 main.go:141] libmachine: (addons-106432)     </interface>
	I0203 10:33:25.622947  117311 main.go:141] libmachine: (addons-106432)     <serial type='pty'>
	I0203 10:33:25.622960  117311 main.go:141] libmachine: (addons-106432)       <target port='0'/>
	I0203 10:33:25.622968  117311 main.go:141] libmachine: (addons-106432)     </serial>
	I0203 10:33:25.622979  117311 main.go:141] libmachine: (addons-106432)     <console type='pty'>
	I0203 10:33:25.622992  117311 main.go:141] libmachine: (addons-106432)       <target type='serial' port='0'/>
	I0203 10:33:25.623004  117311 main.go:141] libmachine: (addons-106432)     </console>
	I0203 10:33:25.623014  117311 main.go:141] libmachine: (addons-106432)     <rng model='virtio'>
	I0203 10:33:25.623027  117311 main.go:141] libmachine: (addons-106432)       <backend model='random'>/dev/random</backend>
	I0203 10:33:25.623039  117311 main.go:141] libmachine: (addons-106432)     </rng>
	I0203 10:33:25.623050  117311 main.go:141] libmachine: (addons-106432)     
	I0203 10:33:25.623057  117311 main.go:141] libmachine: (addons-106432)     
	I0203 10:33:25.623064  117311 main.go:141] libmachine: (addons-106432)   </devices>
	I0203 10:33:25.623074  117311 main.go:141] libmachine: (addons-106432) </domain>
	I0203 10:33:25.623089  117311 main.go:141] libmachine: (addons-106432) 
	I0203 10:33:25.630042  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:0a:9a:65 in network default
	I0203 10:33:25.630549  117311 main.go:141] libmachine: (addons-106432) starting domain...
	I0203 10:33:25.630569  117311 main.go:141] libmachine: (addons-106432) ensuring networks are active...
	I0203 10:33:25.630577  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:25.631138  117311 main.go:141] libmachine: (addons-106432) Ensuring network default is active
	I0203 10:33:25.631483  117311 main.go:141] libmachine: (addons-106432) Ensuring network mk-addons-106432 is active
	I0203 10:33:25.631997  117311 main.go:141] libmachine: (addons-106432) getting domain XML...
	I0203 10:33:25.632673  117311 main.go:141] libmachine: (addons-106432) creating domain...
	I0203 10:33:27.019409  117311 main.go:141] libmachine: (addons-106432) waiting for IP...
	I0203 10:33:27.020258  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:27.020680  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:27.020739  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:27.020674  117333 retry.go:31] will retry after 237.724891ms: waiting for domain to come up
	I0203 10:33:27.260224  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:27.260581  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:27.260611  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:27.260540  117333 retry.go:31] will retry after 285.862095ms: waiting for domain to come up
	I0203 10:33:27.548310  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:27.548734  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:27.548765  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:27.548713  117333 retry.go:31] will retry after 333.569095ms: waiting for domain to come up
	I0203 10:33:27.884152  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:27.884622  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:27.884650  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:27.884583  117333 retry.go:31] will retry after 468.271131ms: waiting for domain to come up
	I0203 10:33:28.354024  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:28.354410  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:28.354464  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:28.354411  117333 retry.go:31] will retry after 509.379249ms: waiting for domain to come up
	I0203 10:33:28.865065  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:28.865482  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:28.865510  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:28.865423  117333 retry.go:31] will retry after 774.59367ms: waiting for domain to come up
	I0203 10:33:29.641297  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:29.641708  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:29.641735  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:29.641678  117333 retry.go:31] will retry after 958.680486ms: waiting for domain to come up
	I0203 10:33:30.601587  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:30.602129  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:30.602162  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:30.602066  117333 retry.go:31] will retry after 1.303634123s: waiting for domain to come up
	I0203 10:33:31.907677  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:31.908067  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:31.908124  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:31.908064  117333 retry.go:31] will retry after 1.607628396s: waiting for domain to come up
	I0203 10:33:33.518114  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:33.518540  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:33.518563  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:33.518524  117333 retry.go:31] will retry after 1.685305196s: waiting for domain to come up
	I0203 10:33:35.205283  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:35.205626  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:35.205680  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:35.205604  117333 retry.go:31] will retry after 1.805224776s: waiting for domain to come up
	I0203 10:33:37.011965  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:37.012360  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:37.012389  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:37.012342  117333 retry.go:31] will retry after 2.867984523s: waiting for domain to come up
	I0203 10:33:39.881559  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:39.882019  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:39.882051  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:39.881964  117333 retry.go:31] will retry after 3.441598816s: waiting for domain to come up
	I0203 10:33:43.327510  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:43.327937  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find current IP address of domain addons-106432 in network mk-addons-106432
	I0203 10:33:43.327987  117311 main.go:141] libmachine: (addons-106432) DBG | I0203 10:33:43.327903  117333 retry.go:31] will retry after 4.383494251s: waiting for domain to come up
	I0203 10:33:47.715115  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:47.715597  117311 main.go:141] libmachine: (addons-106432) found domain IP: 192.168.39.50
	I0203 10:33:47.715623  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has current primary IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:47.715629  117311 main.go:141] libmachine: (addons-106432) reserving static IP address...
	I0203 10:33:47.715996  117311 main.go:141] libmachine: (addons-106432) DBG | unable to find host DHCP lease matching {name: "addons-106432", mac: "52:54:00:c6:39:49", ip: "192.168.39.50"} in network mk-addons-106432
	I0203 10:33:47.785864  117311 main.go:141] libmachine: (addons-106432) DBG | Getting to WaitForSSH function...
	I0203 10:33:47.785910  117311 main.go:141] libmachine: (addons-106432) reserved static IP address 192.168.39.50 for domain addons-106432
	I0203 10:33:47.785924  117311 main.go:141] libmachine: (addons-106432) waiting for SSH...
	I0203 10:33:47.788233  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:47.788789  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:47.788831  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:47.788986  117311 main.go:141] libmachine: (addons-106432) DBG | Using SSH client type: external
	I0203 10:33:47.789013  117311 main.go:141] libmachine: (addons-106432) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa (-rw-------)
	I0203 10:33:47.789047  117311 main.go:141] libmachine: (addons-106432) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 10:33:47.789061  117311 main.go:141] libmachine: (addons-106432) DBG | About to run SSH command:
	I0203 10:33:47.789077  117311 main.go:141] libmachine: (addons-106432) DBG | exit 0
	I0203 10:33:47.921774  117311 main.go:141] libmachine: (addons-106432) DBG | SSH cmd err, output: <nil>: 
	I0203 10:33:47.922067  117311 main.go:141] libmachine: (addons-106432) KVM machine creation complete
	I0203 10:33:47.922401  117311 main.go:141] libmachine: (addons-106432) Calling .GetConfigRaw
	I0203 10:33:47.922946  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:47.923125  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:47.923282  117311 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0203 10:33:47.923300  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:33:47.924482  117311 main.go:141] libmachine: Detecting operating system of created instance...
	I0203 10:33:47.924496  117311 main.go:141] libmachine: Waiting for SSH to be available...
	I0203 10:33:47.924501  117311 main.go:141] libmachine: Getting to WaitForSSH function...
	I0203 10:33:47.924506  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:47.926671  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:47.927016  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:47.927045  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:47.927166  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:47.927377  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:47.927524  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:47.927665  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:47.927811  117311 main.go:141] libmachine: Using SSH client type: native
	I0203 10:33:47.928045  117311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0203 10:33:47.928059  117311 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0203 10:33:48.037184  117311 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 10:33:48.037211  117311 main.go:141] libmachine: Detecting the provisioner...
	I0203 10:33:48.037219  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.040147  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.040504  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.040535  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.040678  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.040873  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.041023  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.041144  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.041313  117311 main.go:141] libmachine: Using SSH client type: native
	I0203 10:33:48.041492  117311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0203 10:33:48.041503  117311 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0203 10:33:48.154337  117311 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0203 10:33:48.154423  117311 main.go:141] libmachine: found compatible host: buildroot
	I0203 10:33:48.154435  117311 main.go:141] libmachine: Provisioning with buildroot...
	I0203 10:33:48.154442  117311 main.go:141] libmachine: (addons-106432) Calling .GetMachineName
	I0203 10:33:48.154678  117311 buildroot.go:166] provisioning hostname "addons-106432"
	I0203 10:33:48.154702  117311 main.go:141] libmachine: (addons-106432) Calling .GetMachineName
	I0203 10:33:48.154896  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.157356  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.157704  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.157735  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.157887  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.158064  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.158249  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.158385  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.158553  117311 main.go:141] libmachine: Using SSH client type: native
	I0203 10:33:48.158777  117311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0203 10:33:48.158800  117311 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-106432 && echo "addons-106432" | sudo tee /etc/hostname
	I0203 10:33:48.282976  117311 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-106432
	
	I0203 10:33:48.283018  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.285648  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.286018  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.286044  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.286208  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.286413  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.286570  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.286729  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.286873  117311 main.go:141] libmachine: Using SSH client type: native
	I0203 10:33:48.287053  117311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0203 10:33:48.287076  117311 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-106432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-106432/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-106432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 10:33:48.406130  117311 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 10:33:48.406170  117311 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 10:33:48.406205  117311 buildroot.go:174] setting up certificates
	I0203 10:33:48.406223  117311 provision.go:84] configureAuth start
	I0203 10:33:48.406243  117311 main.go:141] libmachine: (addons-106432) Calling .GetMachineName
	I0203 10:33:48.406528  117311 main.go:141] libmachine: (addons-106432) Calling .GetIP
	I0203 10:33:48.409186  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.409512  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.409542  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.409678  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.411739  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.412051  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.412076  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.412185  117311 provision.go:143] copyHostCerts
	I0203 10:33:48.412259  117311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 10:33:48.412404  117311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 10:33:48.412503  117311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 10:33:48.412557  117311 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.addons-106432 san=[127.0.0.1 192.168.39.50 addons-106432 localhost minikube]
	I0203 10:33:48.461512  117311 provision.go:177] copyRemoteCerts
	I0203 10:33:48.461567  117311 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 10:33:48.461589  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.463977  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.464298  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.464324  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.464509  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.464699  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.464864  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.464995  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:33:48.551888  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 10:33:48.574218  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 10:33:48.595986  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 10:33:48.617571  117311 provision.go:87] duration metric: took 211.328467ms to configureAuth
	I0203 10:33:48.617599  117311 buildroot.go:189] setting minikube options for container-runtime
	I0203 10:33:48.617761  117311 config.go:182] Loaded profile config "addons-106432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 10:33:48.617839  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.620530  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.620812  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.620838  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.620997  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.621210  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.621404  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.621557  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.621745  117311 main.go:141] libmachine: Using SSH client type: native
	I0203 10:33:48.621910  117311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0203 10:33:48.621924  117311 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 10:33:48.846731  117311 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 10:33:48.846761  117311 main.go:141] libmachine: Checking connection to Docker...
	I0203 10:33:48.846771  117311 main.go:141] libmachine: (addons-106432) Calling .GetURL
	I0203 10:33:48.848135  117311 main.go:141] libmachine: (addons-106432) DBG | using libvirt version 6000000
	I0203 10:33:48.850362  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.850686  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.850720  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.850805  117311 main.go:141] libmachine: Docker is up and running!
	I0203 10:33:48.850827  117311 main.go:141] libmachine: Reticulating splines...
	I0203 10:33:48.850836  117311 client.go:171] duration metric: took 24.196597095s to LocalClient.Create
	I0203 10:33:48.850861  117311 start.go:167] duration metric: took 24.19665836s to libmachine.API.Create "addons-106432"
	I0203 10:33:48.850876  117311 start.go:293] postStartSetup for "addons-106432" (driver="kvm2")
	I0203 10:33:48.850892  117311 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 10:33:48.850914  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:48.851143  117311 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 10:33:48.851174  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.853290  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.853562  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.853589  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.853742  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.853944  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.854110  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.854248  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:33:48.939835  117311 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 10:33:48.943781  117311 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 10:33:48.943806  117311 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 10:33:48.943873  117311 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 10:33:48.943901  117311 start.go:296] duration metric: took 93.014252ms for postStartSetup
	I0203 10:33:48.943935  117311 main.go:141] libmachine: (addons-106432) Calling .GetConfigRaw
	I0203 10:33:48.944492  117311 main.go:141] libmachine: (addons-106432) Calling .GetIP
	I0203 10:33:48.947044  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.947411  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.947439  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.947677  117311 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/config.json ...
	I0203 10:33:48.947855  117311 start.go:128] duration metric: took 24.312076627s to createHost
	I0203 10:33:48.947880  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:48.950214  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.950516  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:48.950547  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:48.950689  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:48.950871  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.951027  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:48.951148  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:48.951294  117311 main.go:141] libmachine: Using SSH client type: native
	I0203 10:33:48.951505  117311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0203 10:33:48.951519  117311 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 10:33:49.062589  117311 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738578829.037483304
	
	I0203 10:33:49.062614  117311 fix.go:216] guest clock: 1738578829.037483304
	I0203 10:33:49.062621  117311 fix.go:229] Guest: 2025-02-03 10:33:49.037483304 +0000 UTC Remote: 2025-02-03 10:33:48.947866972 +0000 UTC m=+24.410394671 (delta=89.616332ms)
	I0203 10:33:49.062643  117311 fix.go:200] guest clock delta is within tolerance: 89.616332ms
	I0203 10:33:49.062648  117311 start.go:83] releasing machines lock for "addons-106432", held for 24.426949372s
	I0203 10:33:49.062672  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:49.062952  117311 main.go:141] libmachine: (addons-106432) Calling .GetIP
	I0203 10:33:49.065539  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:49.065845  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:49.065866  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:49.065992  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:49.066582  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:49.066773  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:33:49.066868  117311 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 10:33:49.066911  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:49.066987  117311 ssh_runner.go:195] Run: cat /version.json
	I0203 10:33:49.067015  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:33:49.069593  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:49.069641  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:49.069927  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:49.069957  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:49.069984  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:49.070017  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:49.070057  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:49.070254  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:49.070254  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:33:49.070447  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:49.070476  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:33:49.070579  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:33:49.070630  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:33:49.070740  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:33:49.180727  117311 ssh_runner.go:195] Run: systemctl --version
	I0203 10:33:49.187533  117311 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 10:33:49.346501  117311 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 10:33:49.352013  117311 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 10:33:49.352081  117311 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 10:33:49.368829  117311 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 10:33:49.368862  117311 start.go:495] detecting cgroup driver to use...
	I0203 10:33:49.368930  117311 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 10:33:49.384366  117311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 10:33:49.397291  117311 docker.go:217] disabling cri-docker service (if available) ...
	I0203 10:33:49.397366  117311 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 10:33:49.410145  117311 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 10:33:49.422934  117311 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 10:33:49.529606  117311 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 10:33:49.692229  117311 docker.go:233] disabling docker service ...
	I0203 10:33:49.692319  117311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 10:33:49.707008  117311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 10:33:49.719706  117311 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 10:33:49.856246  117311 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 10:33:49.970181  117311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 10:33:49.984224  117311 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 10:33:50.002398  117311 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 10:33:50.002476  117311 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 10:33:50.012539  117311 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 10:33:50.012617  117311 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 10:33:50.022708  117311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 10:33:50.032923  117311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 10:33:50.043153  117311 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 10:33:50.053497  117311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 10:33:50.063912  117311 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 10:33:50.080876  117311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
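	Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these values (a sketch reconstructed from the commands above, not a verbatim dump of the file):
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]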
	I0203 10:33:50.090815  117311 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 10:33:50.099922  117311 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 10:33:50.099984  117311 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 10:33:50.113683  117311 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
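	The netfilter preparation above can be reproduced by hand with the same commands the provisioner runs (a sketch based only on the commands in this log):
	  sudo modprobe br_netfilter                             # creates /proc/sys/net/bridge/bridge-nf-call-iptables
	  sudo sysctl net.bridge.bridge-nf-call-iptables         # now succeeds instead of exiting with status 255
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"    # enable IPv4 forwarding for pod traffic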
	I0203 10:33:50.123746  117311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 10:33:50.243570  117311 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 10:33:50.327417  117311 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 10:33:50.327519  117311 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 10:33:50.332410  117311 start.go:563] Will wait 60s for crictl version
	I0203 10:33:50.332491  117311 ssh_runner.go:195] Run: which crictl
	I0203 10:33:50.336150  117311 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 10:33:50.376488  117311 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 10:33:50.376620  117311 ssh_runner.go:195] Run: crio --version
	I0203 10:33:50.408287  117311 ssh_runner.go:195] Run: crio --version
	I0203 10:33:50.436679  117311 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0203 10:33:50.438082  117311 main.go:141] libmachine: (addons-106432) Calling .GetIP
	I0203 10:33:50.440654  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:50.440977  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:33:50.441003  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:33:50.441191  117311 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0203 10:33:50.445047  117311 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 10:33:50.457012  117311 kubeadm.go:883] updating cluster {Name:addons-106432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-106432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 10:33:50.457133  117311 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 10:33:50.457187  117311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 10:33:50.488290  117311 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0203 10:33:50.488387  117311 ssh_runner.go:195] Run: which lz4
	I0203 10:33:50.492162  117311 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 10:33:50.496000  117311 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 10:33:50.496030  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0203 10:33:51.704218  117311 crio.go:462] duration metric: took 1.212087764s to copy over tarball
	I0203 10:33:51.704298  117311 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 10:33:53.894056  117311 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.189720309s)
	I0203 10:33:53.894090  117311 crio.go:469] duration metric: took 2.189837314s to extract the tarball
	I0203 10:33:53.894101  117311 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 10:33:53.930574  117311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 10:33:53.971063  117311 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 10:33:53.971093  117311 cache_images.go:84] Images are preloaded, skipping loading
	I0203 10:33:53.971102  117311 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.32.1 crio true true} ...
	I0203 10:33:53.971201  117311 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-106432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-106432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 10:33:53.971272  117311 ssh_runner.go:195] Run: crio config
	I0203 10:33:54.019370  117311 cni.go:84] Creating CNI manager for ""
	I0203 10:33:54.019393  117311 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 10:33:54.019404  117311 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 10:33:54.019432  117311 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-106432 NodeName:addons-106432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 10:33:54.019556  117311 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-106432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.50"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 10:33:54.019620  117311 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 10:33:54.029263  117311 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 10:33:54.029327  117311 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 10:33:54.038182  117311 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0203 10:33:54.054144  117311 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 10:33:54.070521  117311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0203 10:33:54.085701  117311 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0203 10:33:54.089573  117311 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
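	After this edit (together with the 127.0.1.1 entry written during hostname provisioning and the host.minikube.internal entry added earlier), the guest's /etc/hosts is expected to carry roughly these minikube-managed lines (a sketch assembled from the commands in this log, not a verbatim dump):
	  127.0.1.1      addons-106432
	  192.168.39.1   host.minikube.internal
	  192.168.39.50  control-plane.minikube.internal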
	I0203 10:33:54.100956  117311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 10:33:54.231048  117311 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 10:33:54.246923  117311 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432 for IP: 192.168.39.50
	I0203 10:33:54.246964  117311 certs.go:194] generating shared ca certs ...
	I0203 10:33:54.246988  117311 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.247180  117311 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 10:33:54.599669  117311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt ...
	I0203 10:33:54.599702  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt: {Name:mk80c99b59857fb94332900c2345cbbb88287483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.599881  117311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key ...
	I0203 10:33:54.599893  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key: {Name:mkaafd39d457aa6a8af41be801b36e999e2b2525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.599967  117311 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 10:33:54.641733  117311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt ...
	I0203 10:33:54.641764  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt: {Name:mk21e0ad137e85f5a973739b01b0ef4eb94e23df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.641923  117311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key ...
	I0203 10:33:54.641933  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key: {Name:mk0e5473e24f81c5ba40181e82c4f029103eb4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.642023  117311 certs.go:256] generating profile certs ...
	I0203 10:33:54.642084  117311 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.key
	I0203 10:33:54.642099  117311 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt with IP's: []
	I0203 10:33:54.849600  117311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt ...
	I0203 10:33:54.849631  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: {Name:mkf26acde6bc617cee3a21acc14a649b19359acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.849788  117311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.key ...
	I0203 10:33:54.849799  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.key: {Name:mkf309c3c025e2a0c554198e441c1d677d8aba70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:54.849870  117311 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.key.e8b17829
	I0203 10:33:54.849888  117311 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.crt.e8b17829 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I0203 10:33:55.012928  117311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.crt.e8b17829 ...
	I0203 10:33:55.012960  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.crt.e8b17829: {Name:mk19bf27e2a7a4aab8c109872f7706d2958cbddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:55.013110  117311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.key.e8b17829 ...
	I0203 10:33:55.013123  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.key.e8b17829: {Name:mk78584c8fd1d7e99179ab88b0eb3bb93287c79a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:55.013192  117311 certs.go:381] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.crt.e8b17829 -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.crt
	I0203 10:33:55.013263  117311 certs.go:385] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.key.e8b17829 -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.key
	I0203 10:33:55.013309  117311 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.key
	I0203 10:33:55.013329  117311 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.crt with IP's: []
	I0203 10:33:55.411890  117311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.crt ...
	I0203 10:33:55.411933  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.crt: {Name:mke9b4616928330b44481c1dd6ef9bfb985317d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:55.412120  117311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.key ...
	I0203 10:33:55.412133  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.key: {Name:mk7a2bc44b920dfabe4a81f2ffd65921984ccddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:55.412301  117311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 10:33:55.412337  117311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 10:33:55.412362  117311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 10:33:55.412386  117311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 10:33:55.412956  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 10:33:55.439834  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 10:33:55.463989  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 10:33:55.488383  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 10:33:55.511721  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0203 10:33:55.534416  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 10:33:55.557433  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 10:33:55.580327  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 10:33:55.603101  117311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 10:33:55.625216  117311 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 10:33:55.640657  117311 ssh_runner.go:195] Run: openssl version
	I0203 10:33:55.646059  117311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 10:33:55.656116  117311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 10:33:55.660375  117311 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 10:33:55.660432  117311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 10:33:55.666341  117311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
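	The hash printed by the openssl command above is what names the symlink created here; the two steps amount to (a sketch using the b5213941 value that appears in this log):
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)    # prints b5213941 for this CA
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"               # i.e. /etc/ssl/certs/b5213941.0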
	I0203 10:33:55.677193  117311 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 10:33:55.681448  117311 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 10:33:55.681511  117311 kubeadm.go:392] StartCluster: {Name:addons-106432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-106432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:33:55.681611  117311 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 10:33:55.681669  117311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 10:33:55.716884  117311 cri.go:89] found id: ""
	I0203 10:33:55.716974  117311 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 10:33:55.726531  117311 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 10:33:55.735361  117311 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 10:33:55.744499  117311 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 10:33:55.744518  117311 kubeadm.go:157] found existing configuration files:
	
	I0203 10:33:55.744561  117311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 10:33:55.752561  117311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 10:33:55.752623  117311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 10:33:55.761055  117311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 10:33:55.769526  117311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 10:33:55.769597  117311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 10:33:55.778171  117311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 10:33:55.786400  117311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 10:33:55.786453  117311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 10:33:55.795420  117311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 10:33:55.803497  117311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 10:33:55.803562  117311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 10:33:55.814835  117311 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 10:33:55.886245  117311 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0203 10:33:55.886373  117311 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 10:33:55.984889  117311 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 10:33:55.985043  117311 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 10:33:55.985175  117311 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 10:33:55.998592  117311 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 10:33:56.149405  117311 out.go:235]   - Generating certificates and keys ...
	I0203 10:33:56.149524  117311 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 10:33:56.149636  117311 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 10:33:56.285734  117311 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 10:33:56.367800  117311 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 10:33:56.625072  117311 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 10:33:56.789625  117311 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 10:33:56.861027  117311 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 10:33:56.861170  117311 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-106432 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0203 10:33:57.040119  117311 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 10:33:57.040311  117311 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-106432 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0203 10:33:57.325287  117311 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 10:33:57.567278  117311 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 10:33:57.708149  117311 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 10:33:57.708247  117311 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 10:33:57.798636  117311 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 10:33:57.893025  117311 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 10:33:58.022286  117311 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 10:33:58.230508  117311 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 10:33:58.438559  117311 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 10:33:58.439027  117311 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 10:33:58.441292  117311 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 10:33:58.443515  117311 out.go:235]   - Booting up control plane ...
	I0203 10:33:58.443631  117311 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 10:33:58.443738  117311 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 10:33:58.443834  117311 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 10:33:58.464696  117311 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 10:33:58.470103  117311 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 10:33:58.470183  117311 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 10:33:58.589892  117311 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0203 10:33:58.590082  117311 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 10:33:59.091698  117311 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.916458ms
	I0203 10:33:59.091802  117311 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0203 10:34:04.090426  117311 kubeadm.go:310] [api-check] The API server is healthy after 5.002005918s
	I0203 10:34:04.103726  117311 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 10:34:04.124608  117311 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 10:34:04.153519  117311 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0203 10:34:04.153740  117311 kubeadm.go:310] [mark-control-plane] Marking the node addons-106432 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 10:34:04.171176  117311 kubeadm.go:310] [bootstrap-token] Using token: ffledp.kimldwpigizj0zfs
	I0203 10:34:04.172620  117311 out.go:235]   - Configuring RBAC rules ...
	I0203 10:34:04.172778  117311 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 10:34:04.178229  117311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 10:34:04.698514  117311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 10:34:04.704800  117311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 10:34:04.709904  117311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 10:34:04.715970  117311 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 10:34:04.744137  117311 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 10:34:04.965546  117311 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0203 10:34:05.497964  117311 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0203 10:34:05.498933  117311 kubeadm.go:310] 
	I0203 10:34:05.499012  117311 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0203 10:34:05.499024  117311 kubeadm.go:310] 
	I0203 10:34:05.499124  117311 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0203 10:34:05.499146  117311 kubeadm.go:310] 
	I0203 10:34:05.499168  117311 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0203 10:34:05.499220  117311 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 10:34:05.499264  117311 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 10:34:05.499271  117311 kubeadm.go:310] 
	I0203 10:34:05.499332  117311 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0203 10:34:05.499341  117311 kubeadm.go:310] 
	I0203 10:34:05.499393  117311 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 10:34:05.499403  117311 kubeadm.go:310] 
	I0203 10:34:05.499462  117311 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0203 10:34:05.499554  117311 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 10:34:05.499655  117311 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 10:34:05.499665  117311 kubeadm.go:310] 
	I0203 10:34:05.499771  117311 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0203 10:34:05.499879  117311 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0203 10:34:05.499891  117311 kubeadm.go:310] 
	I0203 10:34:05.500010  117311 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ffledp.kimldwpigizj0zfs \
	I0203 10:34:05.500154  117311 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3450d2c1e7198f4a696236341eb2ce7f113dc8b7d251cc3e872f7a298c2bac92 \
	I0203 10:34:05.500187  117311 kubeadm.go:310] 	--control-plane 
	I0203 10:34:05.500197  117311 kubeadm.go:310] 
	I0203 10:34:05.500283  117311 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0203 10:34:05.500303  117311 kubeadm.go:310] 
	I0203 10:34:05.500407  117311 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ffledp.kimldwpigizj0zfs \
	I0203 10:34:05.500541  117311 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3450d2c1e7198f4a696236341eb2ce7f113dc8b7d251cc3e872f7a298c2bac92 
	I0203 10:34:05.501305  117311 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 10:34:05.501370  117311 cni.go:84] Creating CNI manager for ""
	I0203 10:34:05.501384  117311 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 10:34:05.502937  117311 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 10:34:05.504138  117311 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 10:34:05.514652  117311 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0203 10:34:05.534076  117311 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 10:34:05.534142  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:05.534187  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-106432 minikube.k8s.io/updated_at=2025_02_03T10_34_05_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=addons-106432 minikube.k8s.io/primary=true
	I0203 10:34:05.564621  117311 ops.go:34] apiserver oom_adj: -16
	I0203 10:34:05.685431  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:06.186128  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:06.686350  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:07.186511  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:07.686099  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:08.185509  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:08.685897  117311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 10:34:08.766703  117311 kubeadm.go:1113] duration metric: took 3.232613874s to wait for elevateKubeSystemPrivileges
	I0203 10:34:08.766751  117311 kubeadm.go:394] duration metric: took 13.085245593s to StartCluster
	I0203 10:34:08.766779  117311 settings.go:142] acquiring lock: {Name:mk7f08542cc4ae303b222901a9d369cc0753d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:34:08.766922  117311 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 10:34:08.767281  117311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:34:08.767521  117311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 10:34:08.767529  117311 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 10:34:08.767610  117311 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0203 10:34:08.767731  117311 config.go:182] Loaded profile config "addons-106432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 10:34:08.767746  117311 addons.go:69] Setting yakd=true in profile "addons-106432"
	I0203 10:34:08.767758  117311 addons.go:69] Setting inspektor-gadget=true in profile "addons-106432"
	I0203 10:34:08.767778  117311 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-106432"
	I0203 10:34:08.767785  117311 addons.go:238] Setting addon inspektor-gadget=true in "addons-106432"
	I0203 10:34:08.767789  117311 addons.go:69] Setting ingress=true in profile "addons-106432"
	I0203 10:34:08.767800  117311 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-106432"
	I0203 10:34:08.767795  117311 addons.go:69] Setting storage-provisioner=true in profile "addons-106432"
	I0203 10:34:08.767811  117311 addons.go:69] Setting gcp-auth=true in profile "addons-106432"
	I0203 10:34:08.767822  117311 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-106432"
	I0203 10:34:08.767831  117311 mustload.go:65] Loading cluster: addons-106432
	I0203 10:34:08.767833  117311 addons.go:69] Setting ingress-dns=true in profile "addons-106432"
	I0203 10:34:08.767848  117311 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-106432"
	I0203 10:34:08.767858  117311 addons.go:69] Setting registry=true in profile "addons-106432"
	I0203 10:34:08.767865  117311 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-106432"
	I0203 10:34:08.767873  117311 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-106432"
	I0203 10:34:08.767877  117311 addons.go:69] Setting volcano=true in profile "addons-106432"
	I0203 10:34:08.767887  117311 addons.go:238] Setting addon volcano=true in "addons-106432"
	I0203 10:34:08.767903  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.767867  117311 addons.go:238] Setting addon registry=true in "addons-106432"
	I0203 10:34:08.767913  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.767925  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.767985  117311 config.go:182] Loaded profile config "addons-106432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 10:34:08.767822  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.767836  117311 addons.go:69] Setting metrics-server=true in profile "addons-106432"
	I0203 10:34:08.768211  117311 addons.go:238] Setting addon metrics-server=true in "addons-106432"
	I0203 10:34:08.768236  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768318  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768323  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.767903  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768359  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768365  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768390  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768400  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768409  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768423  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768427  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.767802  117311 addons.go:238] Setting addon ingress=true in "addons-106432"
	I0203 10:34:08.767794  117311 addons.go:69] Setting default-storageclass=true in profile "addons-106432"
	I0203 10:34:08.767808  117311 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-106432"
	I0203 10:34:08.768487  117311 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-106432"
	I0203 10:34:08.768491  117311 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-106432"
	I0203 10:34:08.768505  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768512  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768613  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768629  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.767813  117311 addons.go:69] Setting cloud-spanner=true in profile "addons-106432"
	I0203 10:34:08.768682  117311 addons.go:238] Setting addon cloud-spanner=true in "addons-106432"
	I0203 10:34:08.767768  117311 addons.go:238] Setting addon yakd=true in "addons-106432"
	I0203 10:34:08.768707  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.767826  117311 addons.go:238] Setting addon storage-provisioner=true in "addons-106432"
	I0203 10:34:08.768726  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768833  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768837  117311 addons.go:69] Setting volumesnapshots=true in profile "addons-106432"
	I0203 10:34:08.768469  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768852  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768854  117311 addons.go:238] Setting addon volumesnapshots=true in "addons-106432"
	I0203 10:34:08.767849  117311 addons.go:238] Setting addon ingress-dns=true in "addons-106432"
	I0203 10:34:08.768871  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768878  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768889  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768921  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.768945  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.768980  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.769126  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.769320  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.769351  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.769417  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.768839  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.769489  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.769509  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.769518  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.769848  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.770608  117311 out.go:177] * Verifying Kubernetes components...
	I0203 10:34:08.771944  117311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 10:34:08.789215  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
	I0203 10:34:08.790114  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I0203 10:34:08.790255  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I0203 10:34:08.790377  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.790425  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.790471  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.792806  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.792837  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.790432  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.793077  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0203 10:34:08.793266  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.793378  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.794536  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.794636  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.794646  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.794649  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.794666  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.794672  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.795115  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.795181  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.795202  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.795218  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.795284  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.795303  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.795593  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.795715  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.795723  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.795755  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.795812  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.796487  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.796750  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.796968  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.798356  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0203 10:34:08.799037  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.800992  117311 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-106432"
	I0203 10:34:08.801031  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.801413  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.801449  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.802312  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.802329  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.802392  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.802740  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.802757  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.802894  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.819628  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33923
	I0203 10:34:08.820485  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.821174  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.821200  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.821641  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.822272  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.822318  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.824218  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0203 10:34:08.824838  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.825429  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.825447  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.825794  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.826408  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.826454  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.828130  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I0203 10:34:08.828734  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.829344  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.829361  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.829737  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.830350  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.830382  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.832171  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0203 10:34:08.836089  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.836205  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41801
	I0203 10:34:08.836246  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0203 10:34:08.836841  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I0203 10:34:08.837304  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.837323  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.837355  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.837805  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.837827  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.837850  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.838566  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.838613  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.838654  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.838689  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.839023  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.839574  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.839603  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.842342  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.842480  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.842845  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.842865  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.843206  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.843766  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.843811  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.844196  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.844213  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.846901  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0203 10:34:08.846922  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.847655  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.847704  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.847959  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.848573  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.848600  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.848974  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.849592  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.849634  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.856489  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0203 10:34:08.857316  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.857924  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.857949  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.858411  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.858970  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.859017  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.866289  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0203 10:34:08.866886  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.867478  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.867509  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.867946  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.868163  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.868582  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0203 10:34:08.868777  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0203 10:34:08.869122  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.869196  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I0203 10:34:08.869736  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.869879  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.869896  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.870449  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.870472  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.870479  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.870680  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.870887  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.870980  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.871423  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.872561  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.873133  117311 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0203 10:34:08.873559  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.874362  117311 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0203 10:34:08.874381  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0203 10:34:08.874402  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.875107  117311 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0203 10:34:08.875423  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.876316  117311 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0203 10:34:08.876334  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0203 10:34:08.876355  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.876603  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.876619  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.877389  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0203 10:34:08.877933  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.878799  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.878841  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.879081  117311 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0203 10:34:08.879096  117311 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0203 10:34:08.879116  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.879206  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0203 10:34:08.879602  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0203 10:34:08.880026  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0203 10:34:08.880204  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.880287  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.880353  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.880378  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36329
	I0203 10:34:08.881009  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.881026  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.881079  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.881651  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.881704  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.881722  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.881707  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.881825  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.882181  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.882239  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.882353  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.882365  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.882433  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0203 10:34:08.882562  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.882578  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.882843  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.883008  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.884253  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.884266  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.884328  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.884345  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.884516  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.884618  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.884831  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.884876  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0203 10:34:08.885035  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.885688  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.885719  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.885962  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.886169  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.886503  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.886551  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.886581  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.886788  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.886898  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.887125  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.887143  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.887285  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.887290  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.887440  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.887528  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.887724  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.887802  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.887927  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.887931  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:08.887994  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:08.888274  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.888281  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:08.888309  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:08.888317  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:08.888326  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:08.888332  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:08.888578  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:08.888606  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:08.888614  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	W0203 10:34:08.888699  117311 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0203 10:34:08.888913  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.888928  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.889070  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.889081  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.889211  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.889278  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.889503  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.889990  117311 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0203 10:34:08.890135  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.890158  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.891196  117311 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0203 10:34:08.892047  117311 addons.go:238] Setting addon default-storageclass=true in "addons-106432"
	I0203 10:34:08.892094  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:08.892463  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.892507  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.892520  117311 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0203 10:34:08.892537  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0203 10:34:08.892557  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.892644  117311 out.go:177]   - Using image docker.io/registry:2.8.3
	I0203 10:34:08.894104  117311 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0203 10:34:08.894133  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0203 10:34:08.894154  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.895194  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0203 10:34:08.896630  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.896900  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.897347  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.897515  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.897796  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.897811  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.898477  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.898480  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.898677  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.899674  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.899717  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0203 10:34:08.899682  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.899749  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.899771  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.899945  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.899991  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.900229  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.900265  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.900310  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.900852  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.900983  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.900997  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.900856  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.901371  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.901559  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.902272  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.903340  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.904576  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0203 10:34:08.905262  117311 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0203 10:34:08.905899  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40691
	I0203 10:34:08.906522  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0203 10:34:08.906502  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.906876  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.906902  117311 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0203 10:34:08.906919  117311 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0203 10:34:08.906941  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.907525  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.907544  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.907584  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0203 10:34:08.907781  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.907802  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.909534  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0203 10:34:08.910081  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.910458  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.910480  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0203 10:34:08.911481  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0203 10:34:08.911654  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.911720  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0203 10:34:08.912255  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.912272  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.912345  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.912417  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.912959  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.913080  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.913089  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.913139  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.913473  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.913513  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0203 10:34:08.913709  117311 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0203 10:34:08.914037  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.914225  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.914385  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.914691  117311 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0203 10:34:08.914712  117311 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0203 10:34:08.914732  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.914902  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.915677  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0203 10:34:08.915960  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.916610  117311 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0203 10:34:08.916972  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.916992  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.917018  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.917190  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.917308  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.917419  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0203 10:34:08.917466  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.917580  117311 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0203 10:34:08.917602  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0203 10:34:08.917626  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.918541  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.918953  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.919392  117311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0203 10:34:08.920035  117311 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 10:34:08.920642  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0203 10:34:08.920671  117311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0203 10:34:08.920692  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.920761  117311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0203 10:34:08.921084  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.921084  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.921302  117311 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 10:34:08.921318  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 10:34:08.921334  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.921536  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.921566  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.921592  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.921604  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.921782  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.921957  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.922162  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.922031  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.922402  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.922408  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.922562  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.922702  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.923358  117311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0203 10:34:08.924539  117311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0203 10:34:08.925162  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.925187  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.925577  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.925612  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.925765  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.925792  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.925830  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.926120  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.926227  117311 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0203 10:34:08.926244  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0203 10:34:08.926260  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.926415  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.926540  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.926623  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.926907  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I0203 10:34:08.927146  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.927326  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.927508  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.927543  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.928193  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.928216  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.929140  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.929705  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.929829  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.930207  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.930245  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.930540  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.930745  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.930924  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.931069  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.931600  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.933192  117311 out.go:177]   - Using image docker.io/busybox:stable
	I0203 10:34:08.933434  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0203 10:34:08.934042  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.934632  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.934646  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.934979  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.935037  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0203 10:34:08.935185  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.935382  117311 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0203 10:34:08.935390  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:08.935888  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.935904  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.936187  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.936698  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:08.936708  117311 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0203 10:34:08.936715  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:08.936724  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0203 10:34:08.936744  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.936823  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.938256  117311 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0203 10:34:08.939408  117311 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0203 10:34:08.939432  117311 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0203 10:34:08.939450  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.939641  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.940088  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.940104  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.940269  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.940432  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.940539  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.940636  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	W0203 10:34:08.941304  117311 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0203 10:34:08.941332  117311 retry.go:31] will retry after 195.08816ms: ssh: handshake failed: EOF
	I0203 10:34:08.942232  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.942569  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.942588  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.942748  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.942936  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.943066  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.943209  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:08.953197  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34863
	I0203 10:34:08.953754  117311 main.go:141] libmachine: () Calling .GetVersion
	W0203 10:34:08.953907  117311 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38158->192.168.39.50:22: read: connection reset by peer
	I0203 10:34:08.953933  117311 retry.go:31] will retry after 249.078265ms: ssh: handshake failed: read tcp 192.168.39.1:38158->192.168.39.50:22: read: connection reset by peer
	I0203 10:34:08.954352  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:08.954372  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:08.954776  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:08.954984  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:08.957026  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:08.957270  117311 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 10:34:08.957318  117311 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 10:34:08.957352  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:08.960467  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.960984  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:08.961003  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:08.961185  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:08.961355  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:08.961454  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:08.961572  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:09.021711  117311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 10:34:09.030656  117311 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 10:34:09.266084  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 10:34:09.270595  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0203 10:34:09.278754  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0203 10:34:09.340382  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0203 10:34:09.363175  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0203 10:34:09.363212  117311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0203 10:34:09.379958  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0203 10:34:09.389144  117311 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0203 10:34:09.389178  117311 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0203 10:34:09.399556  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 10:34:09.420133  117311 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0203 10:34:09.420160  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0203 10:34:09.454950  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0203 10:34:09.486821  117311 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0203 10:34:09.486863  117311 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0203 10:34:09.487263  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0203 10:34:09.487285  117311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0203 10:34:09.515760  117311 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0203 10:34:09.515785  117311 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0203 10:34:09.619217  117311 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0203 10:34:09.619249  117311 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0203 10:34:09.640839  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0203 10:34:09.656216  117311 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0203 10:34:09.656246  117311 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0203 10:34:09.772495  117311 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0203 10:34:09.772534  117311 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0203 10:34:09.788687  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0203 10:34:09.788713  117311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0203 10:34:09.813603  117311 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0203 10:34:09.813630  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0203 10:34:09.821468  117311 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0203 10:34:09.821491  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0203 10:34:09.929185  117311 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 10:34:09.929213  117311 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0203 10:34:09.930882  117311 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0203 10:34:09.930905  117311 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0203 10:34:09.979035  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0203 10:34:09.979071  117311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0203 10:34:10.016495  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0203 10:34:10.020179  117311 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0203 10:34:10.020207  117311 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0203 10:34:10.031098  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0203 10:34:10.123072  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0203 10:34:10.123109  117311 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0203 10:34:10.157117  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 10:34:10.173286  117311 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0203 10:34:10.173329  117311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0203 10:34:10.255773  117311 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0203 10:34:10.255802  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0203 10:34:10.329661  117311 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0203 10:34:10.329690  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0203 10:34:10.369190  117311 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0203 10:34:10.369216  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0203 10:34:10.485826  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0203 10:34:10.527921  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0203 10:34:10.599407  117311 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0203 10:34:10.599437  117311 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0203 10:34:10.880374  117311 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0203 10:34:10.880405  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0203 10:34:10.990950  117311 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.969182935s)
	I0203 10:34:10.990994  117311 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
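
The sed pipeline whose completion is logged above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway address (192.168.39.1 in this run). A minimal way to verify the injected block by hand, assuming the same kubectl context, is:

	# Print the patched Corefile; the hosts block should sit just before "forward . /etc/resolv.conf"
	kubectl --context addons-106432 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

	# Resolve the name from inside the cluster (busybox:stable is an arbitrary image choice)
	kubectl --context addons-106432 run dns-check --rm -it --restart=Never \
	  --image=busybox:stable -- nslookup host.minikube.internal
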
	I0203 10:34:10.991020  117311 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.960317465s)
	I0203 10:34:10.991964  117311 node_ready.go:35] waiting up to 6m0s for node "addons-106432" to be "Ready" ...
	I0203 10:34:10.996080  117311 node_ready.go:49] node "addons-106432" has status "Ready":"True"
	I0203 10:34:10.996102  117311 node_ready.go:38] duration metric: took 4.105952ms for node "addons-106432" to be "Ready" ...
	I0203 10:34:10.996115  117311 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 10:34:11.005870  117311 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:11.145667  117311 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0203 10:34:11.145708  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0203 10:34:11.412452  117311 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0203 10:34:11.412491  117311 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0203 10:34:11.505982  117311 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-106432" context rescaled to 1 replicas
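
The rescale above trims coredns to a single replica, which is enough for a one-node cluster. The equivalent manual steps, assuming the same context name, would be roughly:

	# Scale the kube-system coredns deployment down to one replica and wait for the rollout
	kubectl --context addons-106432 -n kube-system scale deployment coredns --replicas=1
	kubectl --context addons-106432 -n kube-system rollout status deployment coredns
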
	I0203 10:34:11.703891  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0203 10:34:13.026970  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:13.055993  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.789861854s)
	I0203 10:34:13.056026  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.785405902s)
	I0203 10:34:13.056077  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056095  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056091  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.777294126s)
	I0203 10:34:13.056135  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056142  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056154  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056160  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056218  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.715795309s)
	I0203 10:34:13.056245  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.676256053s)
	I0203 10:34:13.056265  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056276  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056281  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.656700405s)
	I0203 10:34:13.056250  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056316  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056306  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056429  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056523  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.056553  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.056566  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.056568  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.056594  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.056596  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.056609  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056619  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056621  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.056629  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.056637  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056644  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056574  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.056667  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.056688  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.058078  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.058105  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.058112  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.058357  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.058379  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.058644  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.058656  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.058664  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.058671  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.058874  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.058919  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.058938  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.058942  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.058953  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.058960  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.058892  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.060046  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.060070  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.060085  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.060161  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.060220  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.060247  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.060264  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.061012  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.061025  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.061030  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.061026  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:13.061040  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.061053  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:13.082445  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:13.082464  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:13.082771  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:13.082791  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:15.570101  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:15.770279  117311 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0203 10:34:15.770319  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:15.773564  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:15.773951  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:15.773975  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:15.774260  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:15.774459  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:15.774700  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:15.774908  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:16.021881  117311 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0203 10:34:16.194156  117311 addons.go:238] Setting addon gcp-auth=true in "addons-106432"
	I0203 10:34:16.194226  117311 host.go:66] Checking if "addons-106432" exists ...
	I0203 10:34:16.194564  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:16.194626  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:16.210916  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0203 10:34:16.211401  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:16.211966  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:16.211997  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:16.212350  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:16.213008  117311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:34:16.213060  117311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:34:16.229203  117311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0203 10:34:16.229690  117311 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:34:16.230294  117311 main.go:141] libmachine: Using API Version  1
	I0203 10:34:16.230321  117311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:34:16.230738  117311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:34:16.230980  117311 main.go:141] libmachine: (addons-106432) Calling .GetState
	I0203 10:34:16.232866  117311 main.go:141] libmachine: (addons-106432) Calling .DriverName
	I0203 10:34:16.233094  117311 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0203 10:34:16.233122  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHHostname
	I0203 10:34:16.236078  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:16.236546  117311 main.go:141] libmachine: (addons-106432) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:39:49", ip: ""} in network mk-addons-106432: {Iface:virbr1 ExpiryTime:2025-02-03 11:33:39 +0000 UTC Type:0 Mac:52:54:00:c6:39:49 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-106432 Clientid:01:52:54:00:c6:39:49}
	I0203 10:34:16.236578  117311 main.go:141] libmachine: (addons-106432) DBG | domain addons-106432 has defined IP address 192.168.39.50 and MAC address 52:54:00:c6:39:49 in network mk-addons-106432
	I0203 10:34:16.236773  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHPort
	I0203 10:34:16.237027  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHKeyPath
	I0203 10:34:16.237215  117311 main.go:141] libmachine: (addons-106432) Calling .GetSSHUsername
	I0203 10:34:16.237413  117311 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/addons-106432/id_rsa Username:docker}
	I0203 10:34:16.584843  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.129843317s)
	I0203 10:34:16.584887  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.94400828s)
	I0203 10:34:16.584909  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.584924  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.584933  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.584946  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.584970  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.568434941s)
	I0203 10:34:16.585005  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585016  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585030  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.553902457s)
	I0203 10:34:16.585061  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585073  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585112  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.427948072s)
	I0203 10:34:16.585148  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585161  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585205  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.099331934s)
	I0203 10:34:16.585224  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585233  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585267  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.585296  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.585297  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.585310  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.585319  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.585323  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585328  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.585331  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585336  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585344  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585347  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.585363  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.057406983s)
	I0203 10:34:16.585303  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.585377  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.585381  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.585385  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.585388  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.585394  117311 main.go:141] libmachine: Making call to close driver server
	W0203 10:34:16.585398  117311 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0203 10:34:16.585423  117311 retry.go:31] will retry after 183.852933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
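
The failure above is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the API server has not established the new kind yet when the class is submitted, hence "no matches for kind VolumeSnapshotClass". minikube handles this by retrying after a short backoff (the retry a few lines further down switches to apply --force). A hedged sketch of the more explicit split-apply pattern, using the same manifest paths, would be:

	# 1. Create the snapshot CRDs and wait until the API server has established them
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io

	# 2. Only then apply the objects that depend on those kinds
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
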
	I0203 10:34:16.585400  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.586356  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.586400  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.586408  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.586416  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.586424  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.586471  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.586490  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.586496  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.586505  117311 addons.go:479] Verifying addon ingress=true in "addons-106432"
	I0203 10:34:16.586705  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.586729  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.586735  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.587185  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.587215  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.587245  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.587257  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.587266  117311 addons.go:479] Verifying addon registry=true in "addons-106432"
	I0203 10:34:16.587394  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.587410  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.587574  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.587581  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.587586  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.587591  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.587663  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.587669  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.588411  117311 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-106432 service yakd-dashboard -n yakd-dashboard
	
	I0203 10:34:16.588528  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.588552  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.589414  117311 out.go:177] * Verifying ingress addon...
	I0203 10:34:16.589417  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.589428  117311 addons.go:479] Verifying addon metrics-server=true in "addons-106432"
	I0203 10:34:16.591025  117311 out.go:177] * Verifying registry addon...
	I0203 10:34:16.591762  117311 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0203 10:34:16.593151  117311 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0203 10:34:16.611275  117311 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0203 10:34:16.611296  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:16.612036  117311 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0203 10:34:16.612053  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
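
kapi.go now polls the pods behind each addon's label selector until they report Ready. The same checks can be run by hand with kubectl wait; the commands below are a sketch assuming the same context, with the ingress selector narrowed to the controller component because the admission Job pods under the broader label complete rather than become Ready:

	# Ingress controller pod
	kubectl --context addons-106432 -n ingress-nginx wait --for=condition=Ready \
	  pod -l app.kubernetes.io/component=controller --timeout=10m

	# Registry pods
	kubectl --context addons-106432 -n kube-system wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=registry --timeout=10m
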
	I0203 10:34:16.631761  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:16.631786  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:16.632059  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:16.632080  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:16.632119  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:16.770322  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0203 10:34:17.096872  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:17.097280  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:17.599839  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:17.599881  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:17.925613  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.221662135s)
	I0203 10:34:17.925659  117311 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.692547472s)
	I0203 10:34:17.925667  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:17.925682  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:17.926039  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:17.926130  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:17.926148  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:17.926158  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:17.926170  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:17.926370  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:17.926384  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:17.926413  117311 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-106432"
	I0203 10:34:17.926420  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:17.927092  117311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0203 10:34:17.927852  117311 out.go:177] * Verifying csi-hostpath-driver addon...
	I0203 10:34:17.929468  117311 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0203 10:34:17.930477  117311 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0203 10:34:17.930551  117311 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0203 10:34:17.930576  117311 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0203 10:34:17.958174  117311 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0203 10:34:17.958205  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:18.030115  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:18.039306  117311 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0203 10:34:18.039333  117311 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0203 10:34:18.110714  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:18.111105  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:18.142611  117311 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0203 10:34:18.142637  117311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0203 10:34:18.217870  117311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0203 10:34:18.434941  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:18.596872  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:18.597314  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:18.604481  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.834104137s)
	I0203 10:34:18.604530  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:18.604550  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:18.604881  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:18.604896  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:18.604905  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:18.604912  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:18.604917  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:18.605138  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:18.605154  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:18.605157  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:18.942935  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:19.103510  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:19.105337  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:19.436927  117311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.219006684s)
	I0203 10:34:19.436986  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:19.437006  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:19.437319  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:19.437364  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:19.437382  117311 main.go:141] libmachine: Making call to close driver server
	I0203 10:34:19.437394  117311 main.go:141] libmachine: (addons-106432) Calling .Close
	I0203 10:34:19.437624  117311 main.go:141] libmachine: (addons-106432) DBG | Closing plugin on server side
	I0203 10:34:19.437645  117311 main.go:141] libmachine: Successfully made call to close driver server
	I0203 10:34:19.437660  117311 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 10:34:19.438565  117311 addons.go:479] Verifying addon gcp-auth=true in "addons-106432"
	I0203 10:34:19.440038  117311 out.go:177] * Verifying gcp-auth addon...
	I0203 10:34:19.441892  117311 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0203 10:34:19.462579  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:19.466729  117311 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0203 10:34:19.466746  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:19.597402  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:19.597615  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:19.937185  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:19.945907  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:20.185354  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:20.185485  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:20.435224  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:20.445042  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:20.511881  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:20.600077  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:20.600489  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:20.935221  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:20.944856  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:21.100320  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:21.100326  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:21.435628  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:21.445140  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:21.596126  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:21.597249  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:21.936008  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:21.945572  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:22.096894  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:22.098656  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:22.435869  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:22.445262  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:22.512200  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:22.595668  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:22.596868  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:22.935492  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:22.945469  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:23.096955  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:23.097351  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:23.435692  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:23.445173  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:23.597335  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:23.597372  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:24.140251  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:24.141406  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:24.141617  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:24.141866  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:24.437145  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:24.446156  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:24.596712  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:24.599089  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:24.934403  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:24.945630  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:25.012539  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:25.096429  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:25.097303  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:25.435474  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:25.445257  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:25.600941  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:25.601296  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:25.935368  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:25.945065  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:26.096297  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:26.096938  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:26.436093  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:26.445188  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:26.597271  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:26.597900  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:26.935065  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:26.944978  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:27.096826  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:27.097421  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:27.435417  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:27.445653  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:27.511956  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:27.596773  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:27.596997  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:27.935245  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:27.945078  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:28.096673  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:28.097924  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:28.436226  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:28.445383  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:28.596213  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:28.598943  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:28.935443  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:28.945429  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:29.096293  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:29.097136  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:29.435563  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:29.445588  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:29.597724  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:29.598956  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:29.935214  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:29.946521  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:30.012820  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:30.096058  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:30.098714  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:30.434859  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:30.447344  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:30.595990  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:30.598371  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:30.934759  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:30.946207  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:31.097323  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:31.097503  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:31.437320  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:31.445956  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:31.596563  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:31.597753  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:31.934482  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:31.945423  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:32.100612  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:32.101031  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:32.574160  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:32.574903  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:32.575443  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:32.682150  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:32.682709  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:32.935370  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:32.945820  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:33.096629  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:33.097224  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:33.435870  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:33.445402  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:33.596307  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:33.596504  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:33.935398  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:33.944883  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:34.096978  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:34.097455  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:34.435176  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:34.445044  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:34.934698  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:34.935151  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:34.935935  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:34.945378  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:35.012942  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:35.095621  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:35.097398  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:35.435323  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:35.445410  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:35.597152  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:35.597509  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:35.934944  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:35.944449  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:36.095887  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:36.097547  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:36.435390  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:36.444860  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:36.595719  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:36.596901  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:36.935127  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:37.008676  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:37.096177  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:37.097179  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:37.435254  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:37.444879  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:37.513089  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:37.598676  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:37.602548  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:37.935267  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:37.944927  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:38.096073  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:38.097447  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:38.435556  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:38.444978  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:38.601073  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:38.602807  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:38.936894  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:38.945266  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:39.097478  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:39.097631  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:39.435990  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:39.445220  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:39.595822  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:39.599191  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:39.935292  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:39.947238  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:40.012746  117311 pod_ready.go:103] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"False"
	I0203 10:34:40.096596  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:40.096614  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:40.436327  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:40.445298  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:40.596214  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:40.596577  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:40.935199  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:40.945613  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:41.096469  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:41.097378  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:41.436104  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:41.446013  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:41.597624  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:41.597766  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:41.935216  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:41.945075  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:42.095676  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:42.097446  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:42.435794  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:42.445561  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:42.511596  117311 pod_ready.go:93] pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:42.511620  117311 pod_ready.go:82] duration metric: took 31.505717259s for pod "amd-gpu-device-plugin-pp748" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.511629  117311 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cnpr9" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.513477  117311 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-cnpr9" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-cnpr9" not found
	I0203 10:34:42.513496  117311 pod_ready.go:82] duration metric: took 1.861057ms for pod "coredns-668d6bf9bc-cnpr9" in "kube-system" namespace to be "Ready" ...
	E0203 10:34:42.513504  117311 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-cnpr9" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-cnpr9" not found
	I0203 10:34:42.513511  117311 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ds947" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.517750  117311 pod_ready.go:93] pod "coredns-668d6bf9bc-ds947" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:42.517768  117311 pod_ready.go:82] duration metric: took 4.251069ms for pod "coredns-668d6bf9bc-ds947" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.517777  117311 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.522359  117311 pod_ready.go:93] pod "etcd-addons-106432" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:42.522376  117311 pod_ready.go:82] duration metric: took 4.594298ms for pod "etcd-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.522384  117311 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.526371  117311 pod_ready.go:93] pod "kube-apiserver-addons-106432" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:42.526391  117311 pod_ready.go:82] duration metric: took 4.000519ms for pod "kube-apiserver-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.526402  117311 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.595507  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:42.596591  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:42.710222  117311 pod_ready.go:93] pod "kube-controller-manager-addons-106432" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:42.710247  117311 pod_ready.go:82] duration metric: took 183.838591ms for pod "kube-controller-manager-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.710258  117311 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-shbn7" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:42.935528  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:42.945462  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:43.097102  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:43.097683  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:43.109765  117311 pod_ready.go:93] pod "kube-proxy-shbn7" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:43.109789  117311 pod_ready.go:82] duration metric: took 399.524685ms for pod "kube-proxy-shbn7" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:43.109801  117311 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:43.435504  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:43.446367  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:43.509962  117311 pod_ready.go:93] pod "kube-scheduler-addons-106432" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:43.509990  117311 pod_ready.go:82] duration metric: took 400.182107ms for pod "kube-scheduler-addons-106432" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:43.510023  117311 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xfb74" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:43.595628  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:43.597254  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:43.909862  117311 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xfb74" in "kube-system" namespace has status "Ready":"True"
	I0203 10:34:43.909895  117311 pod_ready.go:82] duration metric: took 399.862048ms for pod "nvidia-device-plugin-daemonset-xfb74" in "kube-system" namespace to be "Ready" ...
	I0203 10:34:43.909907  117311 pod_ready.go:39] duration metric: took 32.913779211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 10:34:43.909930  117311 api_server.go:52] waiting for apiserver process to appear ...
	I0203 10:34:43.910015  117311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 10:34:43.932069  117311 api_server.go:72] duration metric: took 35.164507128s to wait for apiserver process to appear ...
	I0203 10:34:43.932105  117311 api_server.go:88] waiting for apiserver healthz status ...
	I0203 10:34:43.932129  117311 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0203 10:34:43.938229  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:43.938605  117311 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0203 10:34:43.939688  117311 api_server.go:141] control plane version: v1.32.1
	I0203 10:34:43.939714  117311 api_server.go:131] duration metric: took 7.601202ms to wait for apiserver health ...
	I0203 10:34:43.939723  117311 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 10:34:43.946362  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:44.096198  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:44.096778  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:44.115243  117311 system_pods.go:59] 18 kube-system pods found
	I0203 10:34:44.115271  117311 system_pods.go:61] "amd-gpu-device-plugin-pp748" [359ec12e-0b07-4d21-a1d1-00ff3f23757e] Running
	I0203 10:34:44.115276  117311 system_pods.go:61] "coredns-668d6bf9bc-ds947" [f49da9d2-8a43-4611-8117-bcac60ee8f3e] Running
	I0203 10:34:44.115285  117311 system_pods.go:61] "csi-hostpath-attacher-0" [6eb9211b-f0cf-4734-8317-66461c7fbde0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0203 10:34:44.115292  117311 system_pods.go:61] "csi-hostpath-resizer-0" [2517fd36-efc5-4905-99a5-595749929505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0203 10:34:44.115300  117311 system_pods.go:61] "csi-hostpathplugin-pftt6" [ab5452ba-a0f2-4e7a-b587-4f24e966f225] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0203 10:34:44.115306  117311 system_pods.go:61] "etcd-addons-106432" [a041b081-4ad2-4165-beb8-db59525c6f70] Running
	I0203 10:34:44.115312  117311 system_pods.go:61] "kube-apiserver-addons-106432" [94c4f755-76c5-4e36-a453-25160c064573] Running
	I0203 10:34:44.115317  117311 system_pods.go:61] "kube-controller-manager-addons-106432" [39da9b69-d020-4e80-876d-dc99beebae12] Running
	I0203 10:34:44.115323  117311 system_pods.go:61] "kube-ingress-dns-minikube" [6fe5a1f1-c97d-4944-a10e-0e441b6cc2b7] Running
	I0203 10:34:44.115330  117311 system_pods.go:61] "kube-proxy-shbn7" [c2d63525-8e0e-4aee-90ce-4cf830200aff] Running
	I0203 10:34:44.115335  117311 system_pods.go:61] "kube-scheduler-addons-106432" [662feab1-83f2-483e-93ff-72cf0d1b60dc] Running
	I0203 10:34:44.115343  117311 system_pods.go:61] "metrics-server-7fbb699795-cb689" [e4cd7001-8f29-40aa-8ff7-fed7f02eb492] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 10:34:44.115350  117311 system_pods.go:61] "nvidia-device-plugin-daemonset-xfb74" [0890e46b-b717-401e-a098-3ee68502198f] Running
	I0203 10:34:44.115362  117311 system_pods.go:61] "registry-6c88467877-ftnp8" [ffc00625-b39b-43ae-ae8e-ea7a8936124f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0203 10:34:44.115371  117311 system_pods.go:61] "registry-proxy-hlmqp" [5bb931f2-dd11-41dc-9467-c7cc823a3860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0203 10:34:44.115381  117311 system_pods.go:61] "snapshot-controller-68b874b76f-9dsjh" [3e4fcb7b-4c53-41cc-8fbb-98523e448429] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0203 10:34:44.115387  117311 system_pods.go:61] "snapshot-controller-68b874b76f-brqcx" [df097690-2c76-499d-973f-f38247bc10df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0203 10:34:44.115391  117311 system_pods.go:61] "storage-provisioner" [45166f6e-176b-4ead-ab8f-7b3b0f192ea7] Running
	I0203 10:34:44.115398  117311 system_pods.go:74] duration metric: took 175.669273ms to wait for pod list to return data ...
	I0203 10:34:44.115408  117311 default_sa.go:34] waiting for default service account to be created ...
	I0203 10:34:44.309737  117311 default_sa.go:45] found service account: "default"
	I0203 10:34:44.309765  117311 default_sa.go:55] duration metric: took 194.350681ms for default service account to be created ...
	I0203 10:34:44.309777  117311 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 10:34:44.435296  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:44.446014  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:44.514916  117311 system_pods.go:86] 18 kube-system pods found
	I0203 10:34:44.514947  117311 system_pods.go:89] "amd-gpu-device-plugin-pp748" [359ec12e-0b07-4d21-a1d1-00ff3f23757e] Running
	I0203 10:34:44.514955  117311 system_pods.go:89] "coredns-668d6bf9bc-ds947" [f49da9d2-8a43-4611-8117-bcac60ee8f3e] Running
	I0203 10:34:44.514964  117311 system_pods.go:89] "csi-hostpath-attacher-0" [6eb9211b-f0cf-4734-8317-66461c7fbde0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0203 10:34:44.514976  117311 system_pods.go:89] "csi-hostpath-resizer-0" [2517fd36-efc5-4905-99a5-595749929505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0203 10:34:44.514986  117311 system_pods.go:89] "csi-hostpathplugin-pftt6" [ab5452ba-a0f2-4e7a-b587-4f24e966f225] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0203 10:34:44.514992  117311 system_pods.go:89] "etcd-addons-106432" [a041b081-4ad2-4165-beb8-db59525c6f70] Running
	I0203 10:34:44.514998  117311 system_pods.go:89] "kube-apiserver-addons-106432" [94c4f755-76c5-4e36-a453-25160c064573] Running
	I0203 10:34:44.515009  117311 system_pods.go:89] "kube-controller-manager-addons-106432" [39da9b69-d020-4e80-876d-dc99beebae12] Running
	I0203 10:34:44.515016  117311 system_pods.go:89] "kube-ingress-dns-minikube" [6fe5a1f1-c97d-4944-a10e-0e441b6cc2b7] Running
	I0203 10:34:44.515023  117311 system_pods.go:89] "kube-proxy-shbn7" [c2d63525-8e0e-4aee-90ce-4cf830200aff] Running
	I0203 10:34:44.515028  117311 system_pods.go:89] "kube-scheduler-addons-106432" [662feab1-83f2-483e-93ff-72cf0d1b60dc] Running
	I0203 10:34:44.515037  117311 system_pods.go:89] "metrics-server-7fbb699795-cb689" [e4cd7001-8f29-40aa-8ff7-fed7f02eb492] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 10:34:44.515043  117311 system_pods.go:89] "nvidia-device-plugin-daemonset-xfb74" [0890e46b-b717-401e-a098-3ee68502198f] Running
	I0203 10:34:44.515055  117311 system_pods.go:89] "registry-6c88467877-ftnp8" [ffc00625-b39b-43ae-ae8e-ea7a8936124f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0203 10:34:44.515063  117311 system_pods.go:89] "registry-proxy-hlmqp" [5bb931f2-dd11-41dc-9467-c7cc823a3860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0203 10:34:44.515076  117311 system_pods.go:89] "snapshot-controller-68b874b76f-9dsjh" [3e4fcb7b-4c53-41cc-8fbb-98523e448429] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0203 10:34:44.515095  117311 system_pods.go:89] "snapshot-controller-68b874b76f-brqcx" [df097690-2c76-499d-973f-f38247bc10df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0203 10:34:44.515101  117311 system_pods.go:89] "storage-provisioner" [45166f6e-176b-4ead-ab8f-7b3b0f192ea7] Running
	I0203 10:34:44.515109  117311 system_pods.go:126] duration metric: took 205.325809ms to wait for k8s-apps to be running ...
	I0203 10:34:44.515117  117311 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 10:34:44.515164  117311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 10:34:44.545082  117311 system_svc.go:56] duration metric: took 29.953466ms WaitForService to wait for kubelet
	I0203 10:34:44.545124  117311 kubeadm.go:582] duration metric: took 35.777566393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 10:34:44.545153  117311 node_conditions.go:102] verifying NodePressure condition ...
	I0203 10:34:44.596030  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:44.597049  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:44.710580  117311 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 10:34:44.710611  117311 node_conditions.go:123] node cpu capacity is 2
	I0203 10:34:44.710625  117311 node_conditions.go:105] duration metric: took 165.465444ms to run NodePressure ...
	I0203 10:34:44.710640  117311 start.go:241] waiting for startup goroutines ...
	I0203 10:34:44.935921  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:44.945721  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:45.097492  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:45.097775  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:45.436050  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:45.445401  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:45.596983  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:45.597266  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:45.935816  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:45.945638  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:46.096217  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:46.096702  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:46.435899  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:46.444604  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:46.597201  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:46.597244  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:46.935125  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:46.945123  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:47.096208  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:47.098971  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:47.436669  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:47.446940  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:47.595479  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:47.597242  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:47.935465  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:47.945286  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:48.095836  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:48.097318  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:48.436739  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:48.446252  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:48.596873  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:48.597186  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:48.935398  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:48.945323  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:49.097566  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:49.097932  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:49.435695  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:49.445032  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:49.595947  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:49.596278  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:49.935714  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:49.949509  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:50.096548  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:50.097586  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:50.436228  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:50.445824  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:50.596367  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:50.596446  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:50.935997  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:50.944776  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:51.097215  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:51.098740  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:51.434876  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:51.444797  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:51.596791  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:51.597936  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:51.936271  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:51.945658  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:52.096462  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:52.097160  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:52.437360  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:52.445178  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:52.596537  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:52.597514  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:52.935539  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:52.945608  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:53.102998  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:53.103114  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:53.435283  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:53.444847  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:53.596591  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:53.596686  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:53.934652  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:53.945482  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:54.097743  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:54.097986  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:54.434987  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:54.445199  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:54.596333  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:54.596969  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:54.934638  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:54.945987  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:55.097462  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:55.098724  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:55.435677  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:55.445667  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:55.597033  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:55.598106  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:55.934857  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:55.947055  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:56.096010  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:56.098177  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:56.434785  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:56.445679  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:56.596262  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:56.596991  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:56.934523  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:56.945320  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:57.095928  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:57.096779  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:57.434979  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:57.445248  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:57.596326  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:57.597153  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:57.935275  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:57.945418  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:58.097007  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:58.097150  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0203 10:34:58.436546  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:58.445510  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:58.596695  117311 kapi.go:107] duration metric: took 42.00353762s to wait for kubernetes.io/minikube-addons=registry ...
	I0203 10:34:58.596829  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:58.935188  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:58.945615  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:59.096335  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:59.435604  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:59.446213  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:34:59.596672  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:34:59.935142  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:34:59.945583  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:00.097249  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:00.436464  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:00.446429  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:00.596850  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:00.935805  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:00.946117  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:01.095831  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:01.436267  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:01.444942  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:01.596863  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:01.935765  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:01.946253  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:02.096170  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:02.434626  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:02.446355  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:02.596562  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:02.936400  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:02.945946  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:03.097430  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:03.435895  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:03.446427  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:03.596378  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:03.934888  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:03.945296  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:04.096022  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:04.435291  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:04.447308  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:04.601085  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:04.936770  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:04.946104  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:05.096033  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:05.435675  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:05.445742  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:05.596218  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:05.934774  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:05.945551  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:06.095851  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:06.523988  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:06.524173  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:06.617739  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:06.934743  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:06.945795  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:07.096059  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:07.435980  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:07.445920  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:07.596836  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:07.936401  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:07.946279  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:08.096500  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:08.436935  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:08.446051  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:08.596652  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:08.934919  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:08.945228  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:09.231641  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:09.434836  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:09.445396  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:09.596504  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:09.935037  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:09.946135  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:10.096796  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:10.436337  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:10.445097  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:10.596839  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:10.936504  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:10.945788  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:11.097327  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:11.435208  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:11.446028  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:11.596863  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:11.935787  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:11.945346  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:12.095967  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:12.439369  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:12.538388  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:12.595952  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:12.935157  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:12.945931  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:13.097764  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:13.438644  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:13.444698  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:13.597537  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:13.941048  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:14.045537  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:14.146084  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:14.441563  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:14.541625  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:14.596827  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:14.934313  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:14.945601  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:15.096015  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:15.435285  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:15.446185  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:15.596025  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:15.936562  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:15.945386  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:16.096179  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:16.436678  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:16.445460  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:16.601307  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:16.935591  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:16.946208  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:17.095830  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:17.435356  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:17.445197  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:17.595831  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:17.935846  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:17.946165  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:18.095513  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:18.435501  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:18.445938  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:18.630166  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:18.935561  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:18.945658  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:19.096267  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:19.435304  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:19.446151  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:19.596802  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:19.935518  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:19.945209  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:20.095490  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:20.435554  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:20.445322  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:20.598552  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:20.936209  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:20.945713  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:21.096383  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:21.435533  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:21.444892  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:21.599918  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:21.935768  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:21.945727  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:22.096783  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:22.453420  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:22.549575  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:22.596094  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:22.935580  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:22.946733  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:23.096535  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:23.434925  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:23.444390  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:23.595995  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:23.938838  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:23.946904  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:24.405089  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:24.438375  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:24.446561  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:24.596779  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:24.935168  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:24.946294  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:25.096759  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:25.435484  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:25.445460  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:25.596476  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:25.934872  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:25.944397  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:26.099709  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:26.435589  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:26.445543  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:26.595945  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:26.934200  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:26.945512  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:27.404493  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:27.504534  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:27.507035  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:27.595372  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:27.934679  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:27.946060  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:28.096219  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:28.434585  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:28.445358  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:28.596039  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:28.941916  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:29.041543  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:29.096345  117311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0203 10:35:29.437907  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:29.536526  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:29.596527  117311 kapi.go:107] duration metric: took 1m13.00476483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0203 10:35:29.935000  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:29.945991  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:30.435374  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:30.454469  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:30.935420  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:30.945118  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:31.436107  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:31.444957  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:31.936123  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:31.944943  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:32.435928  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:32.444856  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:32.935067  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:32.945197  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:33.435584  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:33.445417  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:33.940294  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:33.945402  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:34.438575  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:34.449342  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:34.935786  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:34.944746  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0203 10:35:35.435281  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:35.445031  117311 kapi.go:107] duration metric: took 1m16.003135421s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0203 10:35:35.447039  117311 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-106432 cluster.
	I0203 10:35:35.448562  117311 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0203 10:35:35.449780  117311 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
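	The three gcp-auth messages above describe the addon's behaviour once enabled: every new pod in the cluster gets the GCP credentials mounted unless it opts out via the gcp-auth-skip-secret label. Below is a minimal Go sketch of a pod spec carrying that label; only the label key comes from the message itself, while the pod name, container image, and the "true" value are illustrative assumptions, not something this log confirms.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Pod whose metadata carries the gcp-auth-skip-secret label, so the
		// gcp-auth admission webhook should leave its credentials unmounted.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // key taken from the log message; "true" is an assumed value
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
		manifest, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(manifest)) // manifest that could then be applied with kubectl
	}

	Applying the printed manifest (for example via kubectl apply -f -) would create a pod the webhook skips, assuming the label value above is what the webhook actually checks for.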
	I0203 10:35:35.936273  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:36.436066  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:36.935387  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:37.437956  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:37.935002  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:38.434996  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:38.937074  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:39.437781  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:39.935484  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:40.435302  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:40.935002  117311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0203 10:35:41.435115  117311 kapi.go:107] duration metric: took 1m23.50463714s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0203 10:35:41.437141  117311 out.go:177] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0203 10:35:41.438373  117311 addons.go:514] duration metric: took 1m32.670764632s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner default-storageclass inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0203 10:35:41.438425  117311 start.go:246] waiting for cluster config update ...
	I0203 10:35:41.438444  117311 start.go:255] writing updated cluster config ...
	I0203 10:35:41.438713  117311 ssh_runner.go:195] Run: rm -f paused
	I0203 10:35:41.490770  117311 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 10:35:41.492585  117311 out.go:177] * Done! kubectl is now configured to use "addons-106432" cluster and "default" namespace by default
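	The long run of kapi.go:96 lines in this log is minikube polling each addon's pods by label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, and so on) roughly every half second until they report Running, at which point kapi.go:107 records the total wait. The sketch below is not minikube's kapi implementation, only a minimal client-go loop with the same shape; the function name, poll interval, namespace, and timeout are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector in ns reports Running,
	// or the timeout elapses; roughly the behaviour the kapi.go:96/107 lines record.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						break
					}
				}
				if running {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods matching %q in %q", selector, ns)
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between polls
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the log waits on for the CSI addon; kube-system namespace is assumed.
		if err := waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pods ready")
	}

	A plain sleep loop is used here instead of a watch purely for brevity; minikube's real kapi code may be structured differently.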
	
	
	==> CRI-O <==
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.065485365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b73bc17-ea52-472a-b8ca-bfe0e2344318 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.065778283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f721f698cd8364fcc19e9a0e289b56754d74819588782d21bbe55cee36f24b06,PodSandboxId:c29bf830a83f3e99d7a9dc0ccba3847a71706b01d0b6d8453b951d7695975984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1738578985117377942,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 353ea63f-c6cb-41d2-a99b-ede66853eb91,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2190c0a6ea65d06928df9b15b90eb28360a24dca35c86bfdf68192ed025f355,PodSandboxId:d0072670969524c393b4f83d4ae9cce04c6f7b7acf735557b4acb7048e23bda5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1738578945710421417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f036f510-1b88-40c2-9d32-f66a37079606,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38582ed1f05db1a5b08cf643ebf1a44caf346288f0daabbb8e6bd6c913dabb4d,PodSandboxId:f422dd9280be80a4e957ab9caa5b22fbf7ce245f345ab39fe62f15c442e34da1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1738578928836925560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9mh5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9152e527-6840-4b29-9985-81b4dabd477f,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5902e5afb7486d2f3672176575d9818eb3549507b3c26d19e94bb54cd851855e,PodSandboxId:e592738b8ade0556a9fcc0cdd64fe44d2ffa523944a6d136578d58578d0fcde2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2b
b6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1738578881444878906,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pp748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359ec12e-0b07-4d21-a1d1-00ff3f23757e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad3144abee15b57a2658c16d816e446e898aef49cf6a63fb4f05a758aec2056,PodSandboxId:fd2307bc2abbf9f9cf40d89273ee7f8a802e10fd8e4ffe603c6986e58fe5e5ad,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1738578864488678844,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fe5a1f1-c97d-4944-a10e-0e441b6cc2b7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebf2124d50c9acd72577cb9fc1cd18c1089dc8c18b9d6343119262e4c28a3a9,PodSandboxId:2b6b907f9c52df4ddbbb387043c177a3139fc5e74d9cdace4e9821202f91c06d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f56
17342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738578859312027977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45166f6e-176b-4ead-ab8f-7b3b0f192ea7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc511f18c500826ef5ae42cb3afb605813bce868ff3025787d5760acfdb5237,PodSandboxId:d7e4b944db7c3cfbe6fd16b3e28002e46d309596cb3f38557aa4f9f283893439,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f9159
6bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738578854319801355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ds947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f49da9d2-8a43-4611-8117-bcac60ee8f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536637dfeba3237afa2f8787772774892fe0f09b9db44f275ae7020f8d095ae3,Pod
SandboxId:49c4ce77f5cec93fd96ee57be55d7387c3b3c5fc2c2779f5451c369d8f599dc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738578852029222637,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2d63525-8e0e-4aee-90ce-4cf830200aff,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b5b498356802f51da944d48c182a3ee5a0d3c5e2a94184253b3c66634b419,PodSandboxId:f67741fc69c3acad39e1ac7
dfef70d82e067a63ee76a35b3ad48409e1c2b9d4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738578839670642366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1f0f8abc5085af1e3342d2d68f4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579e8bd8457c428af6ba1461a9fa6a791d6a89f3e8b05f33839e3d024a452b7,PodSandboxId:3047d01a57fed3365404b7b4b8c0c8569ea27579
2146fb823c19b5ca31877ad7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738578839639348135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bc98dad97545ffdf9c5b6409debaa35,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2c0dc18b6cee3d93e7694f74e2e95c2938b61ade8544205db77c3e936ba1fa,PodSandboxId:5daa3d0702bf9f0661b8ba54960906c6557a2b577e7b2158e9b6badfe
1b189a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738578839637938117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5f6eab5dbb79be6480b9bcbc7f6b30,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc53c95c4d6e707207ccc513b79bb42913ade751a8ace62d7c730c7b176aedc,PodSandboxId:342a09a496331f755ead46cfc439a3252099bc2f30d1fdf6028a3cae8fb8ab8f,Metadata:&ContainerMetadata{Name:kube-c
ontroller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738578839561975604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4744ac3212eff59245fd6bc14dddc2c9,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b73bc17-ea52-472a-b8ca-bfe0e2344318 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.084110448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=467040b9-c99b-4787-8393-5898f7c4a6f1 name=/runtime.v1.RuntimeService/Version
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.084190094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=467040b9-c99b-4787-8393-5898f7c4a6f1 name=/runtime.v1.RuntimeService/Version
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.085041146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7beea8e2-c203-420b-8d40-ca0e06579d4f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.086196701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579123086170909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7beea8e2-c203-420b-8d40-ca0e06579d4f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.086656313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75292673-d8f2-416f-8570-f67aecdc03cf name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.086710526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75292673-d8f2-416f-8570-f67aecdc03cf name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.087083625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f721f698cd8364fcc19e9a0e289b56754d74819588782d21bbe55cee36f24b06,PodSandboxId:c29bf830a83f3e99d7a9dc0ccba3847a71706b01d0b6d8453b951d7695975984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1738578985117377942,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 353ea63f-c6cb-41d2-a99b-ede66853eb91,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2190c0a6ea65d06928df9b15b90eb28360a24dca35c86bfdf68192ed025f355,PodSandboxId:d0072670969524c393b4f83d4ae9cce04c6f7b7acf735557b4acb7048e23bda5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1738578945710421417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f036f510-1b88-40c2-9d32-f66a37079606,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38582ed1f05db1a5b08cf643ebf1a44caf346288f0daabbb8e6bd6c913dabb4d,PodSandboxId:f422dd9280be80a4e957ab9caa5b22fbf7ce245f345ab39fe62f15c442e34da1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1738578928836925560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9mh5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9152e527-6840-4b29-9985-81b4dabd477f,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ef163544723fd48bad017f2241e6aff7fc314d27b014c686a5877e2b37c18f49,PodSandboxId:653438cd65e22e1242f0776155f85d04dbb319a5bb42dc3a1a2b1999b00bcb51,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1738578917484801261,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ztd2k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4425375-5b0e-455c-831c-426db8d0568a,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a071b149dfd62b5834c5f34f1fd756aa5e477b9c19ad3779e31a3f9e5bf6e2,PodSandboxId:b1cd9572b1573f7107ad1060bb557ca3ca67dcf5e8972f094a31fc426632a150,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1738578916433372970,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qpwk2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 097f112b-7675-4f9e-b7d1-b0967ba18145,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5902e5afb7486d2f3672176575d9818eb3549507b3c26d19e94bb54cd851855e,PodSandboxId:e592738b8ade0556a9fcc0cdd64fe44d2ffa523944a6d136578d58578d0fcde2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1738578881444878906,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pp748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359ec12e-0b07-4d21-a1d1-00ff3f23757e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad3144abee15b57a2658c16d816e446e898aef49cf6a63fb4f05a758aec2056,PodSandboxId:fd2307bc2abbf9f9cf40d89273ee7f8a802e10fd8e4ffe603c6986e58fe5e5ad,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1738578864488678844,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fe5a1f1-c97d-4944-a10e-0e441b6cc2b7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebf2124d50c9acd72577cb9fc1cd18c1089dc8c18b9d6343119262e4c28a3a9,PodSandboxId:2b6b907f9c52df4ddbbb387043c177a3139fc5e74d9cdace4e9821202f91c06d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738578859312027977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45166f6e-176b-4ead-ab8f-7b3b0f192ea7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9692215ef05ab45e2e0a419ebfa8f604962454cc3644372af925098fc815367,PodSandboxId:2b6b907f9c52df4ddbbb387043c177a3139fc5e74d9cdace4e9821202f91c06d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002
f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738578855268232905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45166f6e-176b-4ead-ab8f-7b3b0f192ea7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc511f18c500826ef5ae42cb3afb605813bce868ff3025787d5760acfdb5237,PodSandboxId:d7e4b944db7c3cfbe6fd16b3e28002e46d309596cb3f38557aa4f9f283893439,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91
596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738578854319801355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ds947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f49da9d2-8a43-4611-8117-bcac60ee8f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536637dfeba3237afa2f8787772774892fe0f09b9db44f275ae7020f8d095ae3,P
odSandboxId:49c4ce77f5cec93fd96ee57be55d7387c3b3c5fc2c2779f5451c369d8f599dc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738578852029222637,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shbn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2d63525-8e0e-4aee-90ce-4cf830200aff,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8b5b498356802f51da944d48c182a3ee5a0d3c5e2a94184253b3c66634b419,PodSandboxId:f67741fc69c3acad39e1a
c7dfef70d82e067a63ee76a35b3ad48409e1c2b9d4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738578839670642366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1f0f8abc5085af1e3342d2d68f4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579e8bd8457c428af6ba1461a9fa6a791d6a89f3e8b05f33839e3d024a452b7,PodSandboxId:3047d01a57fed3365404b7b4b8c0c8569ea275
792146fb823c19b5ca31877ad7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738578839639348135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bc98dad97545ffdf9c5b6409debaa35,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2c0dc18b6cee3d93e7694f74e2e95c2938b61ade8544205db77c3e936ba1fa,PodSandboxId:5daa3d0702bf9f0661b8ba54960906c6557a2b577e7b2158e9b6bad
fe1b189a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738578839637938117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5f6eab5dbb79be6480b9bcbc7f6b30,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc53c95c4d6e707207ccc513b79bb42913ade751a8ace62d7c730c7b176aedc,PodSandboxId:342a09a496331f755ead46cfc439a3252099bc2f30d1fdf6028a3cae8fb8ab8f,Metadata:&ContainerMetadata{Name:kube
-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738578839561975604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4744ac3212eff59245fd6bc14dddc2c9,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75292673-d8f2-416f-8570-f67aecdc03cf name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.188939424Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Feb 03 10:38:43 addons-106432 crio[657]: time="2025-02-03 10:38:43.189192410Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f721f698cd836       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   c29bf830a83f3       nginx
	e2190c0a6ea65       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   d007267096952       busybox
	38582ed1f05db       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   f422dd9280be8       ingress-nginx-controller-56d7c84fd4-9mh5n
	ef163544723fd       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   653438cd65e22       ingress-nginx-admission-patch-ztd2k
	07a071b149dfd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   b1cd9572b1573       ingress-nginx-admission-create-qpwk2
	5902e5afb7486       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   e592738b8ade0       amd-gpu-device-plugin-pp748
	5ad3144abee15       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   fd2307bc2abbf       kube-ingress-dns-minikube
	7ebf2124d50c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   2b6b907f9c52d       storage-provisioner
	b9692215ef05a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   2b6b907f9c52d       storage-provisioner
	0cc511f18c500       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   d7e4b944db7c3       coredns-668d6bf9bc-ds947
	536637dfeba32       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   49c4ce77f5cec       kube-proxy-shbn7
	cc8b5b4983568       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   f67741fc69c3a       kube-apiserver-addons-106432
	c579e8bd8457c       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   3047d01a57fed       kube-scheduler-addons-106432
	4f2c0dc18b6ce       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   5daa3d0702bf9       etcd-addons-106432
	fdc53c95c4d6e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   342a09a496331       kube-controller-manager-addons-106432
	
	
	==> coredns [0cc511f18c500826ef5ae42cb3afb605813bce868ff3025787d5760acfdb5237] <==
	[INFO] 10.244.0.6:34535 - 7778 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000082322s
	[INFO] 10.244.0.6:34535 - 58276 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000140299s
	[INFO] 10.244.0.6:34535 - 38039 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000210477s
	[INFO] 10.244.0.6:34535 - 20318 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082756s
	[INFO] 10.244.0.6:34535 - 4097 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000053787s
	[INFO] 10.244.0.6:34535 - 38669 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000075364s
	[INFO] 10.244.0.6:34535 - 19847 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000078828s
	[INFO] 10.244.0.6:37859 - 51494 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000173179s
	[INFO] 10.244.0.6:37859 - 51231 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00015547s
	[INFO] 10.244.0.6:57031 - 60172 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136303s
	[INFO] 10.244.0.6:57031 - 59927 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118861s
	[INFO] 10.244.0.6:51362 - 55250 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111215s
	[INFO] 10.244.0.6:51362 - 54785 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099904s
	[INFO] 10.244.0.6:40780 - 64276 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090372s
	[INFO] 10.244.0.6:40780 - 64105 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101538s
	[INFO] 10.244.0.22:44292 - 32040 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000652295s
	[INFO] 10.244.0.22:42650 - 1719 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003373444s
	[INFO] 10.244.0.22:36549 - 7306 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000215616s
	[INFO] 10.244.0.22:53311 - 27839 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013693s
	[INFO] 10.244.0.22:40512 - 47585 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098186s
	[INFO] 10.244.0.22:38266 - 19670 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112997s
	[INFO] 10.244.0.22:47041 - 17899 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004295387s
	[INFO] 10.244.0.22:56918 - 57443 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003992059s
	[INFO] 10.244.0.26:52979 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00028565s
	[INFO] 10.244.0.26:53969 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149747s
	
	
	==> describe nodes <==
	Name:               addons-106432
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-106432
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=addons-106432
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T10_34_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-106432
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 10:34:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-106432
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 10:38:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 10:36:38 +0000   Mon, 03 Feb 2025 10:34:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 10:36:38 +0000   Mon, 03 Feb 2025 10:34:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 10:36:38 +0000   Mon, 03 Feb 2025 10:34:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 10:36:38 +0000   Mon, 03 Feb 2025 10:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-106432
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 605211045a56449e970fab2b796db1a7
	  System UUID:                60521104-5a56-449e-970f-ab2b796db1a7
	  Boot ID:                    7529acc7-321e-4e8f-9af4-fb1d6ea57ab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     hello-world-app-7d9564db4-qrw2h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-9mh5n    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-pp748                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-668d6bf9bc-ds947                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m34s
	  kube-system                 etcd-addons-106432                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-106432                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-106432        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-shbn7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-addons-106432                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m28s  kube-proxy       
	  Normal  Starting                 4m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m38s  kubelet          Node addons-106432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s  kubelet          Node addons-106432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s  kubelet          Node addons-106432 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m38s  kubelet          Node addons-106432 status is now: NodeReady
	  Normal  RegisteredNode           4m35s  node-controller  Node addons-106432 event: Registered Node addons-106432 in Controller
	
	
	==> dmesg <==
	[Feb 3 10:34] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.075452] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.995739] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +1.033367] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.044211] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.000864] kauditd_printk_skb: 170 callbacks suppressed
	[  +6.374206] kauditd_printk_skb: 30 callbacks suppressed
	[ +26.807160] kauditd_printk_skb: 6 callbacks suppressed
	[Feb 3 10:35] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.692585] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.901707] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.874045] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.676321] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.053642] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.334845] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.490562] kauditd_printk_skb: 11 callbacks suppressed
	[Feb 3 10:36] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.331848] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.059572] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.184862] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.889694] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.242570] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.280259] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.576947] kauditd_printk_skb: 19 callbacks suppressed
	[Feb 3 10:37] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [4f2c0dc18b6cee3d93e7694f74e2e95c2938b61ade8544205db77c3e936ba1fa] <==
	{"level":"warn","ts":"2025-02-03T10:35:57.234962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.783009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-02-03T10:35:57.235045Z","caller":"traceutil/trace.go:171","msg":"trace[540498564] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1220; }","duration":"288.913509ms","start":"2025-02-03T10:35:56.946117Z","end":"2025-02-03T10:35:57.235031Z","steps":["trace[540498564] 'range keys from in-memory index tree'  (duration: 288.67417ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:35:57.235217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.257483ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T10:35:57.235267Z","caller":"traceutil/trace.go:171","msg":"trace[2068781011] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1220; }","duration":"188.316814ms","start":"2025-02-03T10:35:57.046940Z","end":"2025-02-03T10:35:57.235257Z","steps":["trace[2068781011] 'range keys from in-memory index tree'  (duration: 188.248112ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:07.801110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.505976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2025-02-03T10:36:07.801183Z","caller":"traceutil/trace.go:171","msg":"trace[2137338699] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1305; }","duration":"151.604147ms","start":"2025-02-03T10:36:07.649568Z","end":"2025-02-03T10:36:07.801172Z","steps":["trace[2137338699] 'range keys from in-memory index tree'  (duration: 151.416063ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:07.801310Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.518504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-02-03T10:36:07.801344Z","caller":"traceutil/trace.go:171","msg":"trace[837587415] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1305; }","duration":"135.573256ms","start":"2025-02-03T10:36:07.665765Z","end":"2025-02-03T10:36:07.801338Z","steps":["trace[837587415] 'range keys from in-memory index tree'  (duration: 135.425291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:07.801554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.408187ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T10:36:07.801631Z","caller":"traceutil/trace.go:171","msg":"trace[549564364] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"106.505329ms","start":"2025-02-03T10:36:07.695118Z","end":"2025-02-03T10:36:07.801623Z","steps":["trace[549564364] 'range keys from in-memory index tree'  (duration: 106.366274ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-03T10:36:11.935873Z","caller":"traceutil/trace.go:171","msg":"trace[1970862874] transaction","detail":"{read_only:false; response_revision:1350; number_of_response:1; }","duration":"105.825142ms","start":"2025-02-03T10:36:11.830033Z","end":"2025-02-03T10:36:11.935858Z","steps":["trace[1970862874] 'process raft request'  (duration: 105.673088ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:24.290898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.906985ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T10:36:24.291008Z","caller":"traceutil/trace.go:171","msg":"trace[1761123146] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1501; }","duration":"244.025392ms","start":"2025-02-03T10:36:24.046973Z","end":"2025-02-03T10:36:24.290998Z","steps":["trace[1761123146] 'range keys from in-memory index tree'  (duration: 243.897254ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:24.291154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.835911ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12271346685884066477 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" mod_revision:1501 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" value_size:906 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-02-03T10:36:24.291183Z","caller":"traceutil/trace.go:171","msg":"trace[121416420] linearizableReadLoop","detail":"{readStateIndex:1554; appliedIndex:1553; }","duration":"411.193273ms","start":"2025-02-03T10:36:23.879984Z","end":"2025-02-03T10:36:24.291177Z","steps":["trace[121416420] 'read index received'  (duration: 76.216613ms)","trace[121416420] 'applied index is now lower than readState.Index'  (duration: 334.976275ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-03T10:36:24.291340Z","caller":"traceutil/trace.go:171","msg":"trace[1969859923] transaction","detail":"{read_only:false; response_revision:1502; number_of_response:1; }","duration":"411.411794ms","start":"2025-02-03T10:36:23.879921Z","end":"2025-02-03T10:36:24.291333Z","steps":["trace[1969859923] 'process raft request'  (duration: 76.344853ms)","trace[1969859923] 'compare'  (duration: 334.553244ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T10:36:24.291393Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T10:36:23.879904Z","time spent":"411.450817ms","remote":"127.0.0.1:57936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":967,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" mod_revision:1501 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" value_size:906 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" > >"}
	{"level":"warn","ts":"2025-02-03T10:36:24.291518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"411.532306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T10:36:24.291576Z","caller":"traceutil/trace.go:171","msg":"trace[1291080280] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1502; }","duration":"411.606135ms","start":"2025-02-03T10:36:23.879965Z","end":"2025-02-03T10:36:24.291571Z","steps":["trace[1291080280] 'agreement among raft nodes before linearized reading'  (duration: 411.536092ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:24.291606Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T10:36:23.879954Z","time spent":"411.646181ms","remote":"127.0.0.1:57966","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-02-03T10:36:24.291704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.692062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" limit:1 ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2025-02-03T10:36:24.291768Z","caller":"traceutil/trace.go:171","msg":"trace[445585711] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1502; }","duration":"316.738213ms","start":"2025-02-03T10:36:23.974988Z","end":"2025-02-03T10:36:24.291727Z","steps":["trace[445585711] 'agreement among raft nodes before linearized reading'  (duration: 316.695122ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T10:36:24.291787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T10:36:23.974971Z","time spent":"316.810122ms","remote":"127.0.0.1:57936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1006,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" limit:1 "}
	{"level":"warn","ts":"2025-02-03T10:36:24.292132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.441815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-02-03T10:36:24.292204Z","caller":"traceutil/trace.go:171","msg":"trace[2011739760] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1502; }","duration":"286.53592ms","start":"2025-02-03T10:36:24.005662Z","end":"2025-02-03T10:36:24.292198Z","steps":["trace[2011739760] 'agreement among raft nodes before linearized reading'  (duration: 286.362251ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:38:43 up 5 min,  0 users,  load average: 0.32, 1.11, 0.61
	Linux addons-106432 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cc8b5b498356802f51da944d48c182a3ee5a0d3c5e2a94184253b3c66634b419] <==
	I0203 10:34:48.258485       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0203 10:35:53.236259       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:46744: use of closed network connection
	E0203 10:35:53.418181       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:46774: use of closed network connection
	I0203 10:36:02.638024       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.146.64"}
	I0203 10:36:14.913598       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0203 10:36:15.955479       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0203 10:36:20.060330       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0203 10:36:20.367962       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.71.135"}
	I0203 10:36:21.163303       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0203 10:36:49.256062       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0203 10:36:51.037232       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0203 10:36:51.649171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 10:36:51.649244       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 10:36:51.666958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 10:36:51.667100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 10:36:51.680098       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 10:36:51.680151       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 10:36:51.697291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 10:36:51.697555       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0203 10:36:51.730793       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0203 10:36:51.730911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0203 10:36:52.680262       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0203 10:36:52.731830       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0203 10:36:52.828294       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0203 10:38:42.017281       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.194.67"}
	
	
	==> kube-controller-manager [fdc53c95c4d6e707207ccc513b79bb42913ade751a8ace62d7c730c7b176aedc] <==
	W0203 10:37:54.524714       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 10:37:54.524829       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 10:38:07.433444       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 10:38:07.434388       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0203 10:38:07.435123       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 10:38:07.435164       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 10:38:10.876903       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 10:38:10.877724       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0203 10:38:10.878559       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 10:38:10.878627       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 10:38:12.575911       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 10:38:12.576787       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0203 10:38:12.577483       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 10:38:12.577542       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 10:38:31.320025       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 10:38:31.321123       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0203 10:38:31.322255       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 10:38:31.322403       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0203 10:38:39.597853       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0203 10:38:39.598722       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0203 10:38:39.599526       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0203 10:38:39.599583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0203 10:38:41.837494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="36.439114ms"
	I0203 10:38:41.855535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="17.754195ms"
	I0203 10:38:41.855655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="37.927µs"
	
	
	==> kube-proxy [536637dfeba3237afa2f8787772774892fe0f09b9db44f275ae7020f8d095ae3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 10:34:14.931836       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 10:34:14.969997       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.50"]
	E0203 10:34:14.970069       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 10:34:15.068416       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 10:34:15.068445       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 10:34:15.068470       1 server_linux.go:170] "Using iptables Proxier"
	I0203 10:34:15.075351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 10:34:15.075626       1 server.go:497] "Version info" version="v1.32.1"
	I0203 10:34:15.075648       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 10:34:15.076964       1 config.go:199] "Starting service config controller"
	I0203 10:34:15.076999       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 10:34:15.077018       1 config.go:329] "Starting node config controller"
	I0203 10:34:15.077021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 10:34:15.077531       1 config.go:105] "Starting endpoint slice config controller"
	I0203 10:34:15.077554       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 10:34:15.178822       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 10:34:15.178859       1 shared_informer.go:320] Caches are synced for service config
	I0203 10:34:15.179168       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c579e8bd8457c428af6ba1461a9fa6a791d6a89f3e8b05f33839e3d024a452b7] <==
	W0203 10:34:02.115608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0203 10:34:02.117036       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:02.115662       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0203 10:34:02.117119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:02.115687       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0203 10:34:02.117214       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:02.115720       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0203 10:34:02.117258       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:02.912483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0203 10:34:02.912711       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:02.978570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 10:34:02.978633       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:02.988019       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0203 10:34:02.988269       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:03.093894       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0203 10:34:03.094010       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:03.214114       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0203 10:34:03.214303       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:03.278819       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0203 10:34:03.278990       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:03.280999       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 10:34:03.281037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 10:34:03.597140       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0203 10:34:03.597271       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 10:34:05.682516       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 03 10:38:05 addons-106432 kubelet[1223]: E0203 10:38:05.396134    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579085395581253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:05 addons-106432 kubelet[1223]: E0203 10:38:05.396234    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579085395581253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:06 addons-106432 kubelet[1223]: I0203 10:38:06.064144    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pp748" secret="" err="secret \"gcp-auth\" not found"
	Feb 03 10:38:15 addons-106432 kubelet[1223]: E0203 10:38:15.398590    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579095398209459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:15 addons-106432 kubelet[1223]: E0203 10:38:15.398647    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579095398209459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:25 addons-106432 kubelet[1223]: E0203 10:38:25.401222    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579105400921335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:25 addons-106432 kubelet[1223]: E0203 10:38:25.401257    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579105400921335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:27 addons-106432 kubelet[1223]: I0203 10:38:27.064478    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 03 10:38:30 addons-106432 kubelet[1223]: I0203 10:38:30.063535    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-668d6bf9bc-ds947" secret="" err="secret \"gcp-auth\" not found"
	Feb 03 10:38:35 addons-106432 kubelet[1223]: E0203 10:38:35.404145    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579115403717889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:35 addons-106432 kubelet[1223]: E0203 10:38:35.404521    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738579115403717889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829468    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="2517fd36-efc5-4905-99a5-595749929505" containerName="csi-resizer"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829507    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="a4c18df3-1117-45c0-bb11-9780f719781f" containerName="task-pv-container"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829515    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab5452ba-a0f2-4e7a-b587-4f24e966f225" containerName="liveness-probe"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829521    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab5452ba-a0f2-4e7a-b587-4f24e966f225" containerName="csi-provisioner"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829527    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="6afe5106-62b7-4eb1-85e4-0d2f43d44f76" containerName="cloud-spanner-emulator"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829531    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab5452ba-a0f2-4e7a-b587-4f24e966f225" containerName="node-driver-registrar"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829536    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab5452ba-a0f2-4e7a-b587-4f24e966f225" containerName="hostpath"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829540    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab5452ba-a0f2-4e7a-b587-4f24e966f225" containerName="csi-snapshotter"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829545    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="6eb9211b-f0cf-4734-8317-66461c7fbde0" containerName="csi-attacher"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829550    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ab5452ba-a0f2-4e7a-b587-4f24e966f225" containerName="csi-external-health-monitor-controller"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829556    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="45f6b937-262e-4d57-8915-d8ed863c5ed6" containerName="local-path-provisioner"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829560    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="df097690-2c76-499d-973f-f38247bc10df" containerName="volume-snapshot-controller"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.829564    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="3e4fcb7b-4c53-41cc-8fbb-98523e448429" containerName="volume-snapshot-controller"
	Feb 03 10:38:41 addons-106432 kubelet[1223]: I0203 10:38:41.996869    1223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8g4t\" (UniqueName: \"kubernetes.io/projected/a43eed8d-aee2-4b25-808e-2481d459a13b-kube-api-access-l8g4t\") pod \"hello-world-app-7d9564db4-qrw2h\" (UID: \"a43eed8d-aee2-4b25-808e-2481d459a13b\") " pod="default/hello-world-app-7d9564db4-qrw2h"
	
	
	==> storage-provisioner [7ebf2124d50c9acd72577cb9fc1cd18c1089dc8c18b9d6343119262e4c28a3a9] <==
	I0203 10:34:19.799389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 10:34:19.816919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 10:34:19.817010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0203 10:34:19.836649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0203 10:34:19.837395       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a29587a-652d-4bd3-bd49-0e7942aff4ee", APIVersion:"v1", ResourceVersion:"822", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-106432_94496b00-8637-46ec-a78e-6767a5348c34 became leader
	I0203 10:34:19.837431       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-106432_94496b00-8637-46ec-a78e-6767a5348c34!
	I0203 10:34:19.938435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-106432_94496b00-8637-46ec-a78e-6767a5348c34!
	
	
	==> storage-provisioner [b9692215ef05ab45e2e0a419ebfa8f604962454cc3644372af925098fc815367] <==
	I0203 10:34:16.208993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0203 10:34:16.271858       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-106432 -n addons-106432
helpers_test.go:261: (dbg) Run:  kubectl --context addons-106432 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-qrw2h ingress-nginx-admission-create-qpwk2 ingress-nginx-admission-patch-ztd2k
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-106432 describe pod hello-world-app-7d9564db4-qrw2h ingress-nginx-admission-create-qpwk2 ingress-nginx-admission-patch-ztd2k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-106432 describe pod hello-world-app-7d9564db4-qrw2h ingress-nginx-admission-create-qpwk2 ingress-nginx-admission-patch-ztd2k: exit status 1 (67.290599ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-qrw2h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-106432/192.168.39.50
	Start Time:       Mon, 03 Feb 2025 10:38:41 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l8g4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l8g4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-qrw2h to addons-106432
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qpwk2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ztd2k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-106432 describe pod hello-world-app-7d9564db4-qrw2h ingress-nginx-admission-create-qpwk2 ingress-nginx-admission-patch-ztd2k: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable ingress-dns --alsologtostderr -v=1: (1.134741183s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable ingress --alsologtostderr -v=1: (7.69661978s)
--- FAIL: TestAddons/parallel/Ingress (153.44s)
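As a follow-up to the post-mortem above, here is a minimal sketch of how the same ingress objects could be inspected by hand with standard kubectl subcommands against the addons-106432 context shown in the logs. The deploy/ingress-nginx-controller name is inferred from the controller pod listed in the node description and is an assumption, as is the cluster still being up at this point; the svc name nginx comes from the apiserver log above.

	kubectl --context addons-106432 -n ingress-nginx get pods -o wide
	kubectl --context addons-106432 get ingress -A
	kubectl --context addons-106432 get svc nginx -o wide
	kubectl --context addons-106432 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50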

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh pgrep buildkitd: exit status 1 (196.173655ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image build -t localhost/my-image:functional-032338 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 image build -t localhost/my-image:functional-032338 testdata/build --alsologtostderr: (4.400035862s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032338 image build -t localhost/my-image:functional-032338 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 43c61b54456
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-032338
--> ce4bd0356a8
Successfully tagged localhost/my-image:functional-032338
ce4bd0356a8e9075a7f2f156bfd4490844e7e223ee0ae49cb37641971715c9bc
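The three steps printed above determine the build input, so here is a hedged reconstruction of what testdata/build presumably contains, expressed as the manual podman invocation that the stderr trace below shows minikube running on the node. Only the FROM/RUN/ADD lines, the image tag, and the podman flags are taken from this report; the content.txt payload and the /tmp path are illustrative assumptions.

	# illustrative reconstruction, not the repository's actual testdata/build
	mkdir -p /tmp/build && cd /tmp/build
	echo hello > content.txt                      # file name from STEP 3/3; contents assumed
	cat > Containerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	sudo podman build -t localhost/my-image:functional-032338 . --cgroup-manager=cgroupfs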
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032338 image build -t localhost/my-image:functional-032338 testdata/build --alsologtostderr:
I0203 10:44:36.390966  125262 out.go:345] Setting OutFile to fd 1 ...
I0203 10:44:36.391260  125262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:36.391275  125262 out.go:358] Setting ErrFile to fd 2...
I0203 10:44:36.391281  125262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:36.391490  125262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
I0203 10:44:36.392109  125262 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:36.392661  125262 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:36.393036  125262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:36.393092  125262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:36.409226  125262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
I0203 10:44:36.409649  125262 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:36.410238  125262 main.go:141] libmachine: Using API Version  1
I0203 10:44:36.410263  125262 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:36.410590  125262 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:36.410803  125262 main.go:141] libmachine: (functional-032338) Calling .GetState
I0203 10:44:36.412648  125262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:36.412694  125262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:36.427959  125262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
I0203 10:44:36.428330  125262 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:36.428826  125262 main.go:141] libmachine: Using API Version  1
I0203 10:44:36.428852  125262 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:36.429170  125262 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:36.429401  125262 main.go:141] libmachine: (functional-032338) Calling .DriverName
I0203 10:44:36.429615  125262 ssh_runner.go:195] Run: systemctl --version
I0203 10:44:36.429642  125262 main.go:141] libmachine: (functional-032338) Calling .GetSSHHostname
I0203 10:44:36.432333  125262 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:36.432663  125262 main.go:141] libmachine: (functional-032338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:e6", ip: ""} in network mk-functional-032338: {Iface:virbr1 ExpiryTime:2025-02-03 11:41:33 +0000 UTC Type:0 Mac:52:54:00:53:ef:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-032338 Clientid:01:52:54:00:53:ef:e6}
I0203 10:44:36.432698  125262 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined IP address 192.168.39.158 and MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:36.432897  125262 main.go:141] libmachine: (functional-032338) Calling .GetSSHPort
I0203 10:44:36.433083  125262 main.go:141] libmachine: (functional-032338) Calling .GetSSHKeyPath
I0203 10:44:36.433229  125262 main.go:141] libmachine: (functional-032338) Calling .GetSSHUsername
I0203 10:44:36.433379  125262 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/functional-032338/id_rsa Username:docker}
I0203 10:44:36.526226  125262 build_images.go:161] Building image from path: /tmp/build.1747320874.tar
I0203 10:44:36.526283  125262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0203 10:44:36.540253  125262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1747320874.tar
I0203 10:44:36.545193  125262 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1747320874.tar: stat -c "%s %y" /var/lib/minikube/build/build.1747320874.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1747320874.tar': No such file or directory
I0203 10:44:36.545244  125262 ssh_runner.go:362] scp /tmp/build.1747320874.tar --> /var/lib/minikube/build/build.1747320874.tar (3072 bytes)
I0203 10:44:36.570634  125262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1747320874
I0203 10:44:36.579801  125262 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1747320874 -xf /var/lib/minikube/build/build.1747320874.tar
I0203 10:44:36.591386  125262 crio.go:315] Building image: /var/lib/minikube/build/build.1747320874
I0203 10:44:36.591451  125262 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-032338 /var/lib/minikube/build/build.1747320874 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0203 10:44:40.684237  125262 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-032338 /var/lib/minikube/build/build.1747320874 --cgroup-manager=cgroupfs: (4.092762193s)
I0203 10:44:40.684301  125262 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1747320874
I0203 10:44:40.713241  125262 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1747320874.tar
I0203 10:44:40.739716  125262 build_images.go:217] Built localhost/my-image:functional-032338 from /tmp/build.1747320874.tar
I0203 10:44:40.739751  125262 build_images.go:133] succeeded building to: functional-032338
I0203 10:44:40.739755  125262 build_images.go:134] failed building to: 
I0203 10:44:40.739811  125262 main.go:141] libmachine: Making call to close driver server
I0203 10:44:40.739828  125262 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:40.740081  125262 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:40.740100  125262 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:40.740114  125262 main.go:141] libmachine: Making call to close driver server
I0203 10:44:40.740122  125262 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:40.740352  125262 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:40.740376  125262 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:40.740375  125262 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls
functional_test.go:468: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 image ls: (2.286166126s)
functional_test.go:463: expected "localhost/my-image:functional-032338" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.88s)
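The check that fails here is straightforward to reproduce by hand: build the image inside the VM, then confirm the tag shows up in `image ls`. Below is a minimal Go sketch of that verification, assuming a `minikube` binary on PATH stands in for out/minikube-linux-amd64, with the profile and tag taken from this run; testdata/build is the test's build context, and any directory containing a Dockerfile would do.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-032338"         // profile name from this run
	tag := "localhost/my-image:" + profile // tag the test builds

	// Build the image inside the minikube VM, mirroring the test's `image build` step.
	build := exec.Command("minikube", "-p", profile, "image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("image build failed: %v\n%s", err, out)
		return
	}

	// Ask the runtime for its image list and look for the freshly built tag.
	out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Printf("image ls failed: %v\n", err)
		return
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("image is present")
	} else {
		fmt.Println("image is missing, which is the failure mode seen above")
	}
}

If the build log reports success but the tag is still absent (as above), one possible next step is comparing the output of `minikube -p functional-032338 ssh -- sudo podman images` with what `image ls` reports.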

TestPreload (176.53s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-127579 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-127579 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.490445548s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-127579 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-127579 image pull gcr.io/k8s-minikube/busybox: (3.784206308s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-127579
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-127579: (7.306245826s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-127579 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0203 11:29:00.130946  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-127579 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.994493639s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-127579 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-03 11:29:26.621680804 +0000 UTC m=+3403.535100116
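The scenario behind this failure, as recorded above: the profile is created with --preload=false on v1.24.4, gcr.io/k8s-minikube/busybox is pulled into it, the VM is stopped, and the restart picks up the v1.24.4 preload tarball; the test then expects the previously pulled image to still appear in `image list`, but only the preloaded images are reported. A small Go sketch of that final assertion, again assuming a `minikube` binary on PATH in place of out/minikube-linux-amd64:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "test-preload-127579"        // profile name from this run
	wanted := "gcr.io/k8s-minikube/busybox" // image the test pulled before the stop

	// List the images the runtime knows about after the restart.
	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		fmt.Printf("image list failed: %v\n", err)
		return
	}
	if strings.Contains(string(out), wanted) {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Printf("busybox missing; runtime reports:\n%s", out)
	}
}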
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-127579 -n test-preload-127579
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-127579 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-127579 logs -n 25: (1.092410981s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-728008 ssh -n                                                                 | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | multinode-728008-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-728008 ssh -n multinode-728008 sudo cat                                       | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | /home/docker/cp-test_multinode-728008-m03_multinode-728008.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-728008 cp multinode-728008-m03:/home/docker/cp-test.txt                       | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | multinode-728008-m02:/home/docker/cp-test_multinode-728008-m03_multinode-728008-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-728008 ssh -n                                                                 | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | multinode-728008-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-728008 ssh -n multinode-728008-m02 sudo cat                                   | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | /home/docker/cp-test_multinode-728008-m03_multinode-728008-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-728008 node stop m03                                                          | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	| node    | multinode-728008 node start                                                             | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:15 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-728008                                                                | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:15 UTC |                     |
	| stop    | -p multinode-728008                                                                     | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:15 UTC | 03 Feb 25 11:18 UTC |
	| start   | -p multinode-728008                                                                     | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:18 UTC | 03 Feb 25 11:20 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-728008                                                                | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:20 UTC |                     |
	| node    | multinode-728008 node delete                                                            | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:20 UTC | 03 Feb 25 11:20 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-728008 stop                                                                   | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:20 UTC | 03 Feb 25 11:23 UTC |
	| start   | -p multinode-728008                                                                     | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:23 UTC | 03 Feb 25 11:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-728008                                                                | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:25 UTC |                     |
	| start   | -p multinode-728008-m02                                                                 | multinode-728008-m02 | jenkins | v1.35.0 | 03 Feb 25 11:25 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-728008-m03                                                                 | multinode-728008-m03 | jenkins | v1.35.0 | 03 Feb 25 11:25 UTC | 03 Feb 25 11:26 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-728008                                                                 | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:26 UTC |                     |
	| delete  | -p multinode-728008-m03                                                                 | multinode-728008-m03 | jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:26 UTC |
	| delete  | -p multinode-728008                                                                     | multinode-728008     | jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:26 UTC |
	| start   | -p test-preload-127579                                                                  | test-preload-127579  | jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:28 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-127579 image pull                                                          | test-preload-127579  | jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-127579                                                                  | test-preload-127579  | jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	| start   | -p test-preload-127579                                                                  | test-preload-127579  | jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:29 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-127579 image list                                                          | test-preload-127579  | jenkins | v1.35.0 | 03 Feb 25 11:29 UTC | 03 Feb 25 11:29 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:28:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:28:13.449703  147863 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:28:13.449802  147863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:28:13.449810  147863 out.go:358] Setting ErrFile to fd 2...
	I0203 11:28:13.449814  147863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:28:13.450017  147863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:28:13.450592  147863 out.go:352] Setting JSON to false
	I0203 11:28:13.451462  147863 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7835,"bootTime":1738574258,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:28:13.451571  147863 start.go:139] virtualization: kvm guest
	I0203 11:28:13.453808  147863 out.go:177] * [test-preload-127579] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:28:13.455065  147863 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:28:13.455064  147863 notify.go:220] Checking for updates...
	I0203 11:28:13.457709  147863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:28:13.458909  147863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:28:13.460069  147863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:28:13.461175  147863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:28:13.462350  147863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:28:13.463785  147863 config.go:182] Loaded profile config "test-preload-127579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0203 11:28:13.464141  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:28:13.464206  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:28:13.479304  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37723
	I0203 11:28:13.479830  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:28:13.480447  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:28:13.480473  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:28:13.480833  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:28:13.481019  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:13.482627  147863 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0203 11:28:13.483753  147863 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:28:13.484089  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:28:13.484137  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:28:13.498909  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I0203 11:28:13.499357  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:28:13.499825  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:28:13.499852  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:28:13.500176  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:28:13.500373  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:13.535865  147863 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 11:28:13.536924  147863 start.go:297] selected driver: kvm2
	I0203 11:28:13.536938  147863 start.go:901] validating driver "kvm2" against &{Name:test-preload-127579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-127579
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:28:13.537048  147863 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:28:13.537680  147863 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:28:13.537757  147863 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:28:13.552347  147863 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:28:13.552678  147863 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:28:13.552711  147863 cni.go:84] Creating CNI manager for ""
	I0203 11:28:13.552750  147863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:28:13.552815  147863 start.go:340] cluster config:
	{Name:test-preload-127579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-127579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:28:13.552929  147863 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:28:13.554740  147863 out.go:177] * Starting "test-preload-127579" primary control-plane node in "test-preload-127579" cluster
	I0203 11:28:13.556037  147863 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0203 11:28:14.436046  147863 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0203 11:28:14.436156  147863 cache.go:56] Caching tarball of preloaded images
	I0203 11:28:14.436403  147863 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0203 11:28:14.438305  147863 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0203 11:28:14.439386  147863 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0203 11:28:14.979024  147863 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0203 11:28:28.174347  147863 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0203 11:28:28.174470  147863 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0203 11:28:29.147854  147863 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0203 11:28:29.148008  147863 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/config.json ...
	I0203 11:28:29.148274  147863 start.go:360] acquireMachinesLock for test-preload-127579: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:28:29.148358  147863 start.go:364] duration metric: took 57.51µs to acquireMachinesLock for "test-preload-127579"
	I0203 11:28:29.148382  147863 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:28:29.148390  147863 fix.go:54] fixHost starting: 
	I0203 11:28:29.148675  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:28:29.148727  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:28:29.163575  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0203 11:28:29.164033  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:28:29.164557  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:28:29.164579  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:28:29.164894  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:28:29.165144  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:29.165274  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetState
	I0203 11:28:29.167004  147863 fix.go:112] recreateIfNeeded on test-preload-127579: state=Stopped err=<nil>
	I0203 11:28:29.167031  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	W0203 11:28:29.167160  147863 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:28:29.169353  147863 out.go:177] * Restarting existing kvm2 VM for "test-preload-127579" ...
	I0203 11:28:29.170689  147863 main.go:141] libmachine: (test-preload-127579) Calling .Start
	I0203 11:28:29.170879  147863 main.go:141] libmachine: (test-preload-127579) starting domain...
	I0203 11:28:29.170900  147863 main.go:141] libmachine: (test-preload-127579) ensuring networks are active...
	I0203 11:28:29.171598  147863 main.go:141] libmachine: (test-preload-127579) Ensuring network default is active
	I0203 11:28:29.171926  147863 main.go:141] libmachine: (test-preload-127579) Ensuring network mk-test-preload-127579 is active
	I0203 11:28:29.172257  147863 main.go:141] libmachine: (test-preload-127579) getting domain XML...
	I0203 11:28:29.172863  147863 main.go:141] libmachine: (test-preload-127579) creating domain...
	I0203 11:28:30.376349  147863 main.go:141] libmachine: (test-preload-127579) waiting for IP...
	I0203 11:28:30.377263  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:30.377671  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:30.377800  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:30.377701  147948 retry.go:31] will retry after 225.418564ms: waiting for domain to come up
	I0203 11:28:30.605309  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:30.605671  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:30.605701  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:30.605645  147948 retry.go:31] will retry after 366.743936ms: waiting for domain to come up
	I0203 11:28:30.974457  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:30.974867  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:30.974911  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:30.974846  147948 retry.go:31] will retry after 420.969336ms: waiting for domain to come up
	I0203 11:28:31.397512  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:31.397908  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:31.397950  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:31.397882  147948 retry.go:31] will retry after 577.487241ms: waiting for domain to come up
	I0203 11:28:31.976689  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:31.977087  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:31.977116  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:31.977042  147948 retry.go:31] will retry after 693.278383ms: waiting for domain to come up
	I0203 11:28:32.671655  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:32.672835  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:32.672862  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:32.672801  147948 retry.go:31] will retry after 682.856375ms: waiting for domain to come up
	I0203 11:28:33.358094  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:33.358510  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:33.358539  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:33.358481  147948 retry.go:31] will retry after 756.245292ms: waiting for domain to come up
	I0203 11:28:34.116740  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:34.117216  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:34.117239  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:34.117179  147948 retry.go:31] will retry after 1.305843664s: waiting for domain to come up
	I0203 11:28:35.425180  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:35.425654  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:35.425685  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:35.425585  147948 retry.go:31] will retry after 1.33354273s: waiting for domain to come up
	I0203 11:28:36.760412  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:36.760872  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:36.760905  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:36.760839  147948 retry.go:31] will retry after 1.837130124s: waiting for domain to come up
	I0203 11:28:38.601116  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:38.601638  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:38.601678  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:38.601626  147948 retry.go:31] will retry after 1.943733385s: waiting for domain to come up
	I0203 11:28:40.547457  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:40.548063  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:40.548096  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:40.548018  147948 retry.go:31] will retry after 3.630793426s: waiting for domain to come up
	I0203 11:28:44.180056  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:44.180472  147863 main.go:141] libmachine: (test-preload-127579) DBG | unable to find current IP address of domain test-preload-127579 in network mk-test-preload-127579
	I0203 11:28:44.180491  147863 main.go:141] libmachine: (test-preload-127579) DBG | I0203 11:28:44.180440  147948 retry.go:31] will retry after 4.049513969s: waiting for domain to come up
	I0203 11:28:48.231084  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.231538  147863 main.go:141] libmachine: (test-preload-127579) found domain IP: 192.168.39.62
	I0203 11:28:48.231564  147863 main.go:141] libmachine: (test-preload-127579) reserving static IP address...
	I0203 11:28:48.231576  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has current primary IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.232012  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "test-preload-127579", mac: "52:54:00:e2:0b:bf", ip: "192.168.39.62"} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.232050  147863 main.go:141] libmachine: (test-preload-127579) reserved static IP address 192.168.39.62 for domain test-preload-127579
	I0203 11:28:48.232070  147863 main.go:141] libmachine: (test-preload-127579) DBG | skip adding static IP to network mk-test-preload-127579 - found existing host DHCP lease matching {name: "test-preload-127579", mac: "52:54:00:e2:0b:bf", ip: "192.168.39.62"}
	I0203 11:28:48.232085  147863 main.go:141] libmachine: (test-preload-127579) waiting for SSH...
	I0203 11:28:48.232096  147863 main.go:141] libmachine: (test-preload-127579) DBG | Getting to WaitForSSH function...
	I0203 11:28:48.233991  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.234298  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.234324  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.234448  147863 main.go:141] libmachine: (test-preload-127579) DBG | Using SSH client type: external
	I0203 11:28:48.234484  147863 main.go:141] libmachine: (test-preload-127579) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa (-rw-------)
	I0203 11:28:48.234512  147863 main.go:141] libmachine: (test-preload-127579) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:28:48.234527  147863 main.go:141] libmachine: (test-preload-127579) DBG | About to run SSH command:
	I0203 11:28:48.234539  147863 main.go:141] libmachine: (test-preload-127579) DBG | exit 0
	I0203 11:28:48.357799  147863 main.go:141] libmachine: (test-preload-127579) DBG | SSH cmd err, output: <nil>: 
	I0203 11:28:48.358146  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetConfigRaw
	I0203 11:28:48.358852  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetIP
	I0203 11:28:48.361185  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.361476  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.361502  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.361726  147863 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/config.json ...
	I0203 11:28:48.361963  147863 machine.go:93] provisionDockerMachine start ...
	I0203 11:28:48.361984  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:48.362236  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:48.364228  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.364514  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.364541  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.364648  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:48.364819  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.364985  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.365092  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:48.365269  147863 main.go:141] libmachine: Using SSH client type: native
	I0203 11:28:48.365525  147863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0203 11:28:48.365541  147863 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:28:48.465967  147863 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:28:48.466067  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetMachineName
	I0203 11:28:48.466380  147863 buildroot.go:166] provisioning hostname "test-preload-127579"
	I0203 11:28:48.466409  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetMachineName
	I0203 11:28:48.466574  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:48.469124  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.469467  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.469489  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.469620  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:48.469809  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.469935  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.470076  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:48.470265  147863 main.go:141] libmachine: Using SSH client type: native
	I0203 11:28:48.470456  147863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0203 11:28:48.470473  147863 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-127579 && echo "test-preload-127579" | sudo tee /etc/hostname
	I0203 11:28:48.587064  147863 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-127579
	
	I0203 11:28:48.587102  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:48.589739  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.590070  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.590100  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.590225  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:48.590411  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.590537  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.590643  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:48.590777  147863 main.go:141] libmachine: Using SSH client type: native
	I0203 11:28:48.590937  147863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0203 11:28:48.590951  147863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-127579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-127579/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-127579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:28:48.702541  147863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:28:48.702580  147863 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:28:48.702613  147863 buildroot.go:174] setting up certificates
	I0203 11:28:48.702626  147863 provision.go:84] configureAuth start
	I0203 11:28:48.702640  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetMachineName
	I0203 11:28:48.702924  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetIP
	I0203 11:28:48.705188  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.705501  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.705527  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.705670  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:48.707841  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.708236  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.708278  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.708374  147863 provision.go:143] copyHostCerts
	I0203 11:28:48.708448  147863 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:28:48.708467  147863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:28:48.708539  147863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:28:48.708625  147863 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:28:48.708633  147863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:28:48.708658  147863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:28:48.708711  147863 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:28:48.708718  147863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:28:48.708742  147863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:28:48.708788  147863 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.test-preload-127579 san=[127.0.0.1 192.168.39.62 localhost minikube test-preload-127579]
	I0203 11:28:48.823621  147863 provision.go:177] copyRemoteCerts
	I0203 11:28:48.823684  147863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:28:48.823736  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:48.826400  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.826696  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.826722  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.826881  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:48.827053  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.827194  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:48.827283  147863 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa Username:docker}
	I0203 11:28:48.907791  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0203 11:28:48.929963  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:28:48.951761  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:28:48.973582  147863 provision.go:87] duration metric: took 270.938384ms to configureAuth
	I0203 11:28:48.973615  147863 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:28:48.973783  147863 config.go:182] Loaded profile config "test-preload-127579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0203 11:28:48.973873  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:48.976620  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.977002  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:48.977034  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:48.977207  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:48.977406  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.977566  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:48.977707  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:48.977860  147863 main.go:141] libmachine: Using SSH client type: native
	I0203 11:28:48.978098  147863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0203 11:28:48.978120  147863 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:28:49.189553  147863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:28:49.189586  147863 machine.go:96] duration metric: took 827.60734ms to provisionDockerMachine
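	The provisioning step a few lines above writes a CRI-O drop-in (/etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS) over SSH and restarts the runtime. A minimal local sketch of that same step, assuming root and using the exact path and value shown in the log:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Same drop-in the provisioner writes via `sudo tee` in the log above.
		content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
			panic(err)
		}
		// Restart CRI-O so the extra options take effect.
		if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
			panic(fmt.Errorf("restart crio: %v: %s", err, out))
		}
	}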
	I0203 11:28:49.189599  147863 start.go:293] postStartSetup for "test-preload-127579" (driver="kvm2")
	I0203 11:28:49.189609  147863 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:28:49.189627  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:49.189938  147863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:28:49.189966  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:49.192441  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.192828  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:49.192853  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.193012  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:49.193191  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:49.193350  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:49.193467  147863 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa Username:docker}
	I0203 11:28:49.272260  147863 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:28:49.276056  147863 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:28:49.276079  147863 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:28:49.276147  147863 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:28:49.276237  147863 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:28:49.276356  147863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:28:49.285058  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:28:49.306838  147863 start.go:296] duration metric: took 117.224408ms for postStartSetup
	I0203 11:28:49.306911  147863 fix.go:56] duration metric: took 20.158521665s for fixHost
	I0203 11:28:49.306941  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:49.309712  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.310055  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:49.310086  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.310313  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:49.310490  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:49.310618  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:49.310743  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:49.310907  147863 main.go:141] libmachine: Using SSH client type: native
	I0203 11:28:49.311081  147863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0203 11:28:49.311091  147863 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:28:49.414353  147863 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738582129.376926061
	
	I0203 11:28:49.414377  147863 fix.go:216] guest clock: 1738582129.376926061
	I0203 11:28:49.414384  147863 fix.go:229] Guest: 2025-02-03 11:28:49.376926061 +0000 UTC Remote: 2025-02-03 11:28:49.306918895 +0000 UTC m=+35.895439634 (delta=70.007166ms)
	I0203 11:28:49.414405  147863 fix.go:200] guest clock delta is within tolerance: 70.007166ms
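	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it stays inside a tolerance. A small sketch of that comparison; the parsing helper and the 2s threshold are our assumptions, not values taken from minikube:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock turns `date +%s.%N` output (e.g. "1738582129.376926061")
	// into a time.Time. The helper name is ours, not minikube's.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, err := parseGuestClock("1738582129.376926061")
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for illustration
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}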
	I0203 11:28:49.414410  147863 start.go:83] releasing machines lock for "test-preload-127579", held for 20.266038426s
	I0203 11:28:49.414431  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:49.414736  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetIP
	I0203 11:28:49.417327  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.417732  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:49.417769  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.417867  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:49.418370  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:49.418521  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:28:49.418633  147863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:28:49.418670  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:49.418771  147863 ssh_runner.go:195] Run: cat /version.json
	I0203 11:28:49.418792  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:28:49.421266  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.421309  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.421621  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:49.421679  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.421706  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:49.421744  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:49.421769  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:49.421978  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:28:49.422016  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:49.422195  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:49.422212  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:28:49.422392  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:28:49.422387  147863 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa Username:docker}
	I0203 11:28:49.422512  147863 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa Username:docker}
	I0203 11:28:49.523421  147863 ssh_runner.go:195] Run: systemctl --version
	I0203 11:28:49.529120  147863 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:28:49.669540  147863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:28:49.675389  147863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:28:49.675478  147863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:28:49.691183  147863 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:28:49.691210  147863 start.go:495] detecting cgroup driver to use...
	I0203 11:28:49.691274  147863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:28:49.708147  147863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:28:49.722143  147863 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:28:49.722219  147863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:28:49.735249  147863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:28:49.748952  147863 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:28:49.863212  147863 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:28:50.002213  147863 docker.go:233] disabling docker service ...
	I0203 11:28:50.002288  147863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:28:50.016343  147863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:28:50.028928  147863 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:28:50.161862  147863 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:28:50.286765  147863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:28:50.300634  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:28:50.318355  147863 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0203 11:28:50.318425  147863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.328269  147863 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:28:50.328343  147863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.337935  147863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.347502  147863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.357775  147863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:28:50.368718  147863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.378297  147863 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.396068  147863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:28:50.406035  147863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:28:50.414830  147863 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:28:50.414877  147863 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:28:50.427166  147863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
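	The netfilter lines above show the fallback path: the bridge sysctl does not exist until br_netfilter is loaded, and IPv4 forwarding is then forced on for pod traffic. A rough local equivalent, assuming root; the error handling is ours:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// The sysctl only exists once the br_netfilter module is loaded.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}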
	I0203 11:28:50.435586  147863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:28:50.546842  147863 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:28:50.634531  147863 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:28:50.634611  147863 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:28:50.639079  147863 start.go:563] Will wait 60s for crictl version
	I0203 11:28:50.639132  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:50.642633  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:28:50.679684  147863 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:28:50.679800  147863 ssh_runner.go:195] Run: crio --version
	I0203 11:28:50.709831  147863 ssh_runner.go:195] Run: crio --version
	I0203 11:28:50.737399  147863 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0203 11:28:50.738567  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetIP
	I0203 11:28:50.741347  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:50.741702  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:28:50.741724  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:28:50.741926  147863 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0203 11:28:50.745889  147863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:28:50.757966  147863 kubeadm.go:883] updating cluster {Name:test-preload-127579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-127579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:28:50.758102  147863 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0203 11:28:50.758156  147863 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:28:50.794227  147863 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0203 11:28:50.794290  147863 ssh_runner.go:195] Run: which lz4
	I0203 11:28:50.798120  147863 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:28:50.801791  147863 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:28:50.801824  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0203 11:28:52.173471  147863 crio.go:462] duration metric: took 1.375374257s to copy over tarball
	I0203 11:28:52.173550  147863 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:28:54.581713  147863 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408130104s)
	I0203 11:28:54.581754  147863 crio.go:469] duration metric: took 2.408246838s to extract the tarball
	I0203 11:28:54.581765  147863 ssh_runner.go:146] rm: /preloaded.tar.lz4
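	The preload block above copies /preloaded.tar.lz4 in when the expected images are missing, unpacks it into /var, and deletes it. A condensed local sketch of the extract-and-clean-up step; paths and tar flags come from the log, the control flow is simplified and assumes root:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("no preload tarball present:", err)
			return
		}
		// --xattrs keeps file capabilities on the extracted image layers.
		extract := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := extract.CombinedOutput(); err != nil {
			panic(fmt.Errorf("extract preload: %v: %s", err, out))
		}
		// The tarball is only a transport vehicle; remove it once extracted.
		if err := os.Remove(tarball); err != nil {
			fmt.Println("cleanup:", err)
		}
	}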
	I0203 11:28:54.622020  147863 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:28:54.661605  147863 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0203 11:28:54.661634  147863 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0203 11:28:54.661706  147863 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:28:54.661722  147863 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:54.661749  147863 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:54.661760  147863 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0203 11:28:54.661734  147863 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:54.661790  147863 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:54.661798  147863 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:54.661710  147863 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:54.663332  147863 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:54.663347  147863 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0203 11:28:54.663403  147863 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:54.663413  147863 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:28:54.663333  147863 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:54.663335  147863 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:54.663337  147863 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:54.663332  147863 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:54.881472  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0203 11:28:54.884251  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:54.884503  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:54.886314  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:54.891028  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:54.893983  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:54.908956  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:54.967414  147863 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0203 11:28:54.967487  147863 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0203 11:28:54.967537  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.088982  147863 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0203 11:28:55.089028  147863 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:55.089073  147863 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0203 11:28:55.089114  147863 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:55.089130  147863 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0203 11:28:55.089150  147863 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:55.089154  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.089170  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.089077  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.089236  147863 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0203 11:28:55.089255  147863 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:55.089275  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.089291  147863 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0203 11:28:55.089312  147863 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:55.089336  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.089351  147863 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0203 11:28:55.089376  147863 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:55.089407  147863 ssh_runner.go:195] Run: which crictl
	I0203 11:28:55.089414  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0203 11:28:55.102667  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:55.102738  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:55.102771  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:55.102775  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:55.153931  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0203 11:28:55.153965  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:55.154062  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:55.227663  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:55.227732  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:55.227805  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:55.232318  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:55.311708  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0203 11:28:55.311796  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:55.311842  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:55.356563  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0203 11:28:55.356594  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0203 11:28:55.356718  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0203 11:28:55.360941  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0203 11:28:55.465249  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0203 11:28:55.465335  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0203 11:28:55.465383  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0203 11:28:55.471548  147863 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0203 11:28:55.512695  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0203 11:28:55.512791  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0203 11:28:55.512823  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0203 11:28:55.512887  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0203 11:28:55.512939  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0203 11:28:55.512990  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0203 11:28:55.518177  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0203 11:28:55.518222  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0203 11:28:55.518242  147863 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0203 11:28:55.518273  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0203 11:28:55.518287  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0203 11:28:55.547534  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0203 11:28:55.547694  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0203 11:28:55.575924  147863 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0203 11:28:55.575962  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0203 11:28:55.576005  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0203 11:28:55.576013  147863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0203 11:28:55.576076  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0203 11:28:55.820888  147863 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:28:58.306284  147863 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.787968922s)
	I0203 11:28:58.306330  147863 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.788039751s)
	I0203 11:28:58.306355  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0203 11:28:58.306332  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0203 11:28:58.306384  147863 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0203 11:28:58.306419  147863 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.730394889s)
	I0203 11:28:58.306435  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0203 11:28:58.306393  147863 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.758667357s)
	I0203 11:28:58.306461  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0203 11:28:58.306472  147863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0203 11:28:58.306459  147863 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.485544723s)
	I0203 11:28:58.650499  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0203 11:28:58.650555  147863 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0203 11:28:58.650609  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0203 11:28:59.293718  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0203 11:28:59.293778  147863 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0203 11:28:59.293855  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0203 11:29:00.037371  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0203 11:29:00.037428  147863 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0203 11:29:00.037493  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0203 11:29:02.182670  147863 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.145142881s)
	I0203 11:29:02.182705  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0203 11:29:02.182739  147863 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0203 11:29:02.182811  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0203 11:29:02.623988  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0203 11:29:02.624050  147863 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0203 11:29:02.624104  147863 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0203 11:29:03.466995  147863 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0203 11:29:03.467036  147863 cache_images.go:123] Successfully loaded all cached images
	I0203 11:29:03.467042  147863 cache_images.go:92] duration metric: took 8.805396323s to LoadCachedImages
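	The cache_images lines above boil down to a loop: for every image tarball already staged under /var/lib/minikube/images, run `podman load` so CRI-O's shared storage can see it. A sketch of that loop; the image names and directory come from the log, the loop itself is our simplification and assumes root:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)
	
	func main() {
		images := []string{
			"pause_3.7", "coredns_v1.8.6", "kube-controller-manager_v1.24.4",
			"kube-apiserver_v1.24.4", "etcd_3.5.3-0", "kube-scheduler_v1.24.4",
			"kube-proxy_v1.24.4",
		}
		for _, img := range images {
			path := filepath.Join("/var/lib/minikube/images", img)
			// podman shares its storage with CRI-O, so a load here is visible to crictl.
			if out, err := exec.Command("podman", "load", "-i", path).CombinedOutput(); err != nil {
				fmt.Printf("loading %s failed: %v: %s\n", img, err, out)
				continue
			}
			fmt.Printf("loaded %s\n", img)
		}
	}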
	I0203 11:29:03.467058  147863 kubeadm.go:934] updating node { 192.168.39.62 8443 v1.24.4 crio true true} ...
	I0203 11:29:03.467214  147863 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-127579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-127579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:29:03.467302  147863 ssh_runner.go:195] Run: crio config
	I0203 11:29:03.512497  147863 cni.go:84] Creating CNI manager for ""
	I0203 11:29:03.512517  147863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:29:03.512539  147863 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:29:03.512558  147863 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-127579 NodeName:test-preload-127579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:29:03.512680  147863 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-127579"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:29:03.512743  147863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0203 11:29:03.522772  147863 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:29:03.522850  147863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:29:03.532341  147863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0203 11:29:03.548552  147863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:29:03.564407  147863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0203 11:29:03.580677  147863 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I0203 11:29:03.584263  147863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
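	The grep/echo/cp one-liner above (also used earlier for host.minikube.internal) is an idempotent hosts update: drop any previous mapping for the name, then append the fresh one. The same pattern in Go; the helper name and file mode are ours:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry removes any existing line ending in "\t<name>" and appends
	// the desired "<ip>\t<name>" mapping, mirroring the shell pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale mapping, rewritten below
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.62", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}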
	I0203 11:29:03.596656  147863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:29:03.717156  147863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:29:03.733897  147863 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579 for IP: 192.168.39.62
	I0203 11:29:03.733922  147863 certs.go:194] generating shared ca certs ...
	I0203 11:29:03.733952  147863 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:29:03.734135  147863 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:29:03.734186  147863 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:29:03.734197  147863 certs.go:256] generating profile certs ...
	I0203 11:29:03.734273  147863 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/client.key
	I0203 11:29:03.734332  147863 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/apiserver.key.17a938e2
	I0203 11:29:03.734368  147863 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/proxy-client.key
	I0203 11:29:03.734508  147863 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:29:03.734555  147863 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:29:03.734570  147863 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:29:03.734597  147863 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:29:03.734620  147863 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:29:03.734640  147863 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:29:03.734695  147863 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:29:03.735366  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:29:03.780680  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:29:03.817943  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:29:03.842161  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:29:03.877485  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0203 11:29:03.903990  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:29:03.943544  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:29:03.965943  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:29:03.987827  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:29:04.009018  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:29:04.030900  147863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:29:04.052473  147863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:29:04.068422  147863 ssh_runner.go:195] Run: openssl version
	I0203 11:29:04.074151  147863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:29:04.084814  147863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:29:04.089137  147863 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:29:04.089212  147863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:29:04.095040  147863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:29:04.105690  147863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:29:04.116155  147863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:29:04.120368  147863 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:29:04.120438  147863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:29:04.126162  147863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:29:04.136667  147863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:29:04.147160  147863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:29:04.151636  147863 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:29:04.151690  147863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:29:04.157301  147863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
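	The openssl/ln sequence above installs each CA into the system trust store under its OpenSSL subject hash (<hash>.0). A compact sketch of that step; the function name is ours and it assumes root:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCertByHash symlinks a CA certificate into /etc/ssl/certs as <subject-hash>.0,
	// which is how OpenSSL-based clients locate trusted roots.
	func linkCertByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // `ln -fs` semantics: replace an existing link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		for _, c := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/116606.pem",
			"/usr/share/ca-certificates/1166062.pem",
		} {
			if err := linkCertByHash(c); err != nil {
				fmt.Println(err)
			}
		}
	}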
	I0203 11:29:04.168172  147863 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:29:04.173323  147863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:29:04.179197  147863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:29:04.185632  147863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:29:04.192138  147863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:29:04.198021  147863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:29:04.203728  147863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
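	The run of `openssl x509 ... -checkend 86400` calls above is an expiry probe: the command exits non-zero when a certificate would expire within the next 24 hours. The same probe wrapped in Go; the helper name is ours:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// certValidFor24h returns true when the certificate is not due to expire
	// within the next 86400 seconds, matching the probes in the log.
	func certValidFor24h(path string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
	}
	
	func main() {
		for _, c := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			fmt.Printf("%s valid for 24h: %v\n", c, certValidFor24h(c))
		}
	}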
	I0203 11:29:04.209614  147863 kubeadm.go:392] StartCluster: {Name:test-preload-127579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-127579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:29:04.209704  147863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:29:04.209752  147863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:29:04.252498  147863 cri.go:89] found id: ""
	I0203 11:29:04.252595  147863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:29:04.262648  147863 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 11:29:04.262670  147863 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 11:29:04.262723  147863 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 11:29:04.272374  147863 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:29:04.272887  147863 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-127579" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:29:04.272992  147863 kubeconfig.go:62] /home/jenkins/minikube-integration/20354-109432/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-127579" cluster setting kubeconfig missing "test-preload-127579" context setting]
	I0203 11:29:04.273252  147863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:29:04.273873  147863 kapi.go:59] client config for test-preload-127579: &rest.Config{Host:"https://192.168.39.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/client.crt", KeyFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/client.key", CAFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 11:29:04.274565  147863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 11:29:04.283880  147863 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.62
	I0203 11:29:04.283916  147863 kubeadm.go:1160] stopping kube-system containers ...
	I0203 11:29:04.283929  147863 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0203 11:29:04.283977  147863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:29:04.317040  147863 cri.go:89] found id: ""
	I0203 11:29:04.317127  147863 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 11:29:04.333978  147863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:29:04.344031  147863 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:29:04.344056  147863 kubeadm.go:157] found existing configuration files:
	
	I0203 11:29:04.344108  147863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:29:04.353662  147863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:29:04.353734  147863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:29:04.363494  147863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:29:04.373198  147863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:29:04.373256  147863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:29:04.382995  147863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:29:04.392282  147863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:29:04.392343  147863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:29:04.402290  147863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:29:04.411608  147863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:29:04.411680  147863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
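
The grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it in the next step. A rough shell equivalent of that loop (same paths as in the log; a sketch, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
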
	I0203 11:29:04.421251  147863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:29:04.430817  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:29:04.521742  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:29:05.685221  147863 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163433467s)
	I0203 11:29:05.685262  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:29:05.932056  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:29:05.989561  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
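
Instead of a full "kubeadm init", the restart path replays individual init phases against the generated config. The five phases above can be reproduced by hand with the versioned binary (called by full path here rather than via "env PATH=..." as the log does; functionally the same):

    KUBEADM=/var/lib/minikube/binaries/v1.24.4/kubeadm
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo "$KUBEADM" init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
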
	I0203 11:29:06.051102  147863 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:29:06.051195  147863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:29:06.551462  147863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:29:07.051830  147863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:29:07.070386  147863 api_server.go:72] duration metric: took 1.019283522s to wait for apiserver process to appear ...
	I0203 11:29:07.070422  147863 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:29:07.070448  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:07.070936  147863 api_server.go:269] stopped: https://192.168.39.62:8443/healthz: Get "https://192.168.39.62:8443/healthz": dial tcp 192.168.39.62:8443: connect: connection refused
	I0203 11:29:07.570570  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:07.571201  147863 api_server.go:269] stopped: https://192.168.39.62:8443/healthz: Get "https://192.168.39.62:8443/healthz": dial tcp 192.168.39.62:8443: connect: connection refused
	I0203 11:29:08.070838  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:10.793812  147863 api_server.go:279] https://192.168.39.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:29:10.793852  147863 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:29:10.793873  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:10.808584  147863 api_server.go:279] https://192.168.39.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:29:10.808612  147863 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:29:11.071142  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:11.077055  147863 api_server.go:279] https://192.168.39.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:29:11.077088  147863 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:29:11.570713  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:11.576034  147863 api_server.go:279] https://192.168.39.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:29:11.576060  147863 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:29:12.070682  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:12.076336  147863 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I0203 11:29:12.085151  147863 api_server.go:141] control plane version: v1.24.4
	I0203 11:29:12.085190  147863 api_server.go:131] duration metric: took 5.014759693s to wait for apiserver health ...
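
The healthz polling above follows the usual apiserver startup progression: connection refused while the static pod is starting, 403 for the early probes that arrive as system:anonymous, 500 with [-] entries while poststart hooks such as rbac/bootstrap-roles are still pending, and finally 200/ok. The same verbose endpoint can be queried by hand from inside the VM, for example through the regenerated admin kubeconfig (a sketch; paths taken from earlier log lines):

    sudo KUBECONFIG=/etc/kubernetes/admin.conf \
      /var/lib/minikube/binaries/v1.24.4/kubectl get --raw '/healthz?verbose'
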
	I0203 11:29:12.085203  147863 cni.go:84] Creating CNI manager for ""
	I0203 11:29:12.085213  147863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:29:12.087114  147863 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 11:29:12.088469  147863 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 11:29:12.118327  147863 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
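
The scp above writes a small bridge CNI config into /etc/cni/net.d. The log does not show the file's contents; purely as an illustration, a typical bridge + portmap conflist of roughly this size looks like the following (every field value here is an assumption, not necessarily what minikube generated):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
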
	I0203 11:29:12.179523  147863 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:29:12.179622  147863 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 11:29:12.179642  147863 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 11:29:12.194064  147863 system_pods.go:59] 7 kube-system pods found
	I0203 11:29:12.194101  147863 system_pods.go:61] "coredns-6d4b75cb6d-dgkpw" [6107dc65-ef5a-41a9-8608-356d4256f652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:29:12.194107  147863 system_pods.go:61] "etcd-test-preload-127579" [73fb2c98-844b-4b8e-b4be-b36f38803f83] Running
	I0203 11:29:12.194112  147863 system_pods.go:61] "kube-apiserver-test-preload-127579" [406b5efc-7804-47da-9220-8e9e36f90d5b] Running
	I0203 11:29:12.194116  147863 system_pods.go:61] "kube-controller-manager-test-preload-127579" [cc60a058-5403-4687-9955-b48a14534749] Running
	I0203 11:29:12.194122  147863 system_pods.go:61] "kube-proxy-qs2mr" [5618adf0-ccbb-4692-989b-9e6b2b09f35c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0203 11:29:12.194125  147863 system_pods.go:61] "kube-scheduler-test-preload-127579" [b82199e4-cc0b-4328-9d28-0501757546e8] Running
	I0203 11:29:12.194133  147863 system_pods.go:61] "storage-provisioner" [f35d43c7-4f21-445b-a727-c573d777a0ca] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:29:12.194138  147863 system_pods.go:74] duration metric: took 14.593687ms to wait for pod list to return data ...
	I0203 11:29:12.194147  147863 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:29:12.198813  147863 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:29:12.198844  147863 node_conditions.go:123] node cpu capacity is 2
	I0203 11:29:12.198855  147863 node_conditions.go:105] duration metric: took 4.704538ms to run NodePressure ...
	I0203 11:29:12.198875  147863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:29:12.506565  147863 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0203 11:29:12.514199  147863 kubeadm.go:739] kubelet initialised
	I0203 11:29:12.514222  147863 kubeadm.go:740] duration metric: took 7.625433ms waiting for restarted kubelet to initialise ...
	I0203 11:29:12.514231  147863 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:29:12.519404  147863 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:12.524132  147863 pod_ready.go:98] node "test-preload-127579" hosting pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.524175  147863 pod_ready.go:82] duration metric: took 4.744163ms for pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace to be "Ready" ...
	E0203 11:29:12.524186  147863 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-127579" hosting pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.524193  147863 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:12.532708  147863 pod_ready.go:98] node "test-preload-127579" hosting pod "etcd-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.532741  147863 pod_ready.go:82] duration metric: took 8.538927ms for pod "etcd-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	E0203 11:29:12.532753  147863 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-127579" hosting pod "etcd-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.532761  147863 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:12.537449  147863 pod_ready.go:98] node "test-preload-127579" hosting pod "kube-apiserver-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.537474  147863 pod_ready.go:82] duration metric: took 4.702373ms for pod "kube-apiserver-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	E0203 11:29:12.537484  147863 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-127579" hosting pod "kube-apiserver-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.537490  147863 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:12.586186  147863 pod_ready.go:98] node "test-preload-127579" hosting pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.586215  147863 pod_ready.go:82] duration metric: took 48.715849ms for pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	E0203 11:29:12.586225  147863 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-127579" hosting pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.586232  147863 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qs2mr" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:12.985564  147863 pod_ready.go:98] node "test-preload-127579" hosting pod "kube-proxy-qs2mr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.985596  147863 pod_ready.go:82] duration metric: took 399.354277ms for pod "kube-proxy-qs2mr" in "kube-system" namespace to be "Ready" ...
	E0203 11:29:12.985610  147863 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-127579" hosting pod "kube-proxy-qs2mr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:12.985620  147863 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:13.384451  147863 pod_ready.go:98] node "test-preload-127579" hosting pod "kube-scheduler-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:13.384485  147863 pod_ready.go:82] duration metric: took 398.856936ms for pod "kube-scheduler-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	E0203 11:29:13.384499  147863 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-127579" hosting pod "kube-scheduler-test-preload-127579" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:13.384510  147863 pod_ready.go:39] duration metric: took 870.268891ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:29:13.384540  147863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:29:13.397727  147863 ops.go:34] apiserver oom_adj: -16
	I0203 11:29:13.397749  147863 kubeadm.go:597] duration metric: took 9.135073061s to restartPrimaryControlPlane
	I0203 11:29:13.397760  147863 kubeadm.go:394] duration metric: took 9.188156803s to StartCluster
	I0203 11:29:13.397779  147863 settings.go:142] acquiring lock: {Name:mk7f08542cc4ae303b222901a9d369cc0753d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:29:13.397854  147863 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:29:13.398576  147863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:29:13.398817  147863 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:29:13.398892  147863 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 11:29:13.398986  147863 addons.go:69] Setting storage-provisioner=true in profile "test-preload-127579"
	I0203 11:29:13.399006  147863 addons.go:238] Setting addon storage-provisioner=true in "test-preload-127579"
	W0203 11:29:13.399017  147863 addons.go:247] addon storage-provisioner should already be in state true
	I0203 11:29:13.399046  147863 addons.go:69] Setting default-storageclass=true in profile "test-preload-127579"
	I0203 11:29:13.399067  147863 host.go:66] Checking if "test-preload-127579" exists ...
	I0203 11:29:13.399077  147863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-127579"
	I0203 11:29:13.399055  147863 config.go:182] Loaded profile config "test-preload-127579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0203 11:29:13.399511  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:29:13.399557  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:29:13.399576  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:29:13.399625  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:29:13.400414  147863 out.go:177] * Verifying Kubernetes components...
	I0203 11:29:13.401531  147863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:29:13.415495  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0203 11:29:13.415506  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0203 11:29:13.415986  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:29:13.416007  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:29:13.416491  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:29:13.416509  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:29:13.416492  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:29:13.416538  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:29:13.416844  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:29:13.416890  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:29:13.417088  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetState
	I0203 11:29:13.417401  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:29:13.417445  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:29:13.419286  147863 kapi.go:59] client config for test-preload-127579: &rest.Config{Host:"https://192.168.39.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/client.crt", KeyFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/profiles/test-preload-127579/client.key", CAFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 11:29:13.419580  147863 addons.go:238] Setting addon default-storageclass=true in "test-preload-127579"
	W0203 11:29:13.419596  147863 addons.go:247] addon default-storageclass should already be in state true
	I0203 11:29:13.419620  147863 host.go:66] Checking if "test-preload-127579" exists ...
	I0203 11:29:13.419912  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:29:13.419953  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:29:13.432840  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0203 11:29:13.433351  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:29:13.433886  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:29:13.433913  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:29:13.433986  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
	I0203 11:29:13.434303  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:29:13.434399  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:29:13.434654  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetState
	I0203 11:29:13.434920  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:29:13.434944  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:29:13.435297  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:29:13.435928  147863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:29:13.435979  147863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:29:13.436419  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:29:13.438178  147863 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:29:13.439224  147863 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:29:13.439241  147863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:29:13.439259  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:29:13.442067  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:29:13.442504  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:29:13.442530  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:29:13.442674  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:29:13.442879  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:29:13.443005  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:29:13.443123  147863 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa Username:docker}
	I0203 11:29:13.477472  147863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0203 11:29:13.478070  147863 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:29:13.478730  147863 main.go:141] libmachine: Using API Version  1
	I0203 11:29:13.478754  147863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:29:13.479133  147863 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:29:13.479348  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetState
	I0203 11:29:13.481114  147863 main.go:141] libmachine: (test-preload-127579) Calling .DriverName
	I0203 11:29:13.481352  147863 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:29:13.481369  147863 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:29:13.481386  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHHostname
	I0203 11:29:13.484347  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:29:13.484772  147863 main.go:141] libmachine: (test-preload-127579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:bf", ip: ""} in network mk-test-preload-127579: {Iface:virbr1 ExpiryTime:2025-02-03 12:28:40 +0000 UTC Type:0 Mac:52:54:00:e2:0b:bf Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-127579 Clientid:01:52:54:00:e2:0b:bf}
	I0203 11:29:13.484804  147863 main.go:141] libmachine: (test-preload-127579) DBG | domain test-preload-127579 has defined IP address 192.168.39.62 and MAC address 52:54:00:e2:0b:bf in network mk-test-preload-127579
	I0203 11:29:13.484957  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHPort
	I0203 11:29:13.485151  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHKeyPath
	I0203 11:29:13.485329  147863 main.go:141] libmachine: (test-preload-127579) Calling .GetSSHUsername
	I0203 11:29:13.485511  147863 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/test-preload-127579/id_rsa Username:docker}
	I0203 11:29:13.583010  147863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:29:13.601524  147863 node_ready.go:35] waiting up to 6m0s for node "test-preload-127579" to be "Ready" ...
	I0203 11:29:13.668044  147863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:29:13.711245  147863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:29:14.647764  147863 main.go:141] libmachine: Making call to close driver server
	I0203 11:29:14.647800  147863 main.go:141] libmachine: (test-preload-127579) Calling .Close
	I0203 11:29:14.647883  147863 main.go:141] libmachine: Making call to close driver server
	I0203 11:29:14.647901  147863 main.go:141] libmachine: (test-preload-127579) Calling .Close
	I0203 11:29:14.648150  147863 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:29:14.648171  147863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:29:14.648180  147863 main.go:141] libmachine: Making call to close driver server
	I0203 11:29:14.648188  147863 main.go:141] libmachine: (test-preload-127579) Calling .Close
	I0203 11:29:14.648281  147863 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:29:14.648293  147863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:29:14.648306  147863 main.go:141] libmachine: Making call to close driver server
	I0203 11:29:14.648315  147863 main.go:141] libmachine: (test-preload-127579) Calling .Close
	I0203 11:29:14.648281  147863 main.go:141] libmachine: (test-preload-127579) DBG | Closing plugin on server side
	I0203 11:29:14.648401  147863 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:29:14.648417  147863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:29:14.648431  147863 main.go:141] libmachine: (test-preload-127579) DBG | Closing plugin on server side
	I0203 11:29:14.648542  147863 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:29:14.648559  147863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:29:14.648599  147863 main.go:141] libmachine: (test-preload-127579) DBG | Closing plugin on server side
	I0203 11:29:14.656072  147863 main.go:141] libmachine: Making call to close driver server
	I0203 11:29:14.656092  147863 main.go:141] libmachine: (test-preload-127579) Calling .Close
	I0203 11:29:14.656322  147863 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:29:14.656338  147863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:29:14.658235  147863 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 11:29:14.659326  147863 addons.go:514] duration metric: took 1.260443071s for enable addons: enabled=[storage-provisioner default-storageclass]
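
Both addons are applied inside the VM with the version-matched kubectl and the in-VM kubeconfig, as the two "kubectl apply -f" runs above show. From the host, the result can be cross-checked against the profile's context (a sketch; "standard" is minikube's usual default StorageClass name and is assumed here):

    minikube -p test-preload-127579 addons list | grep -E 'storage-provisioner|default-storageclass'
    kubectl --context test-preload-127579 -n kube-system get pod storage-provisioner
    kubectl --context test-preload-127579 get storageclass standard
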
	I0203 11:29:15.605753  147863 node_ready.go:53] node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:18.105754  147863 node_ready.go:53] node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:20.106197  147863 node_ready.go:53] node "test-preload-127579" has status "Ready":"False"
	I0203 11:29:21.105536  147863 node_ready.go:49] node "test-preload-127579" has status "Ready":"True"
	I0203 11:29:21.105559  147863 node_ready.go:38] duration metric: took 7.503989449s for node "test-preload-127579" to be "Ready" ...
	I0203 11:29:21.105568  147863 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:29:21.110165  147863 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:21.114333  147863 pod_ready.go:93] pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace has status "Ready":"True"
	I0203 11:29:21.114352  147863 pod_ready.go:82] duration metric: took 4.16211ms for pod "coredns-6d4b75cb6d-dgkpw" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:21.114360  147863 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:23.122102  147863 pod_ready.go:103] pod "etcd-test-preload-127579" in "kube-system" namespace has status "Ready":"False"
	I0203 11:29:25.120903  147863 pod_ready.go:93] pod "etcd-test-preload-127579" in "kube-system" namespace has status "Ready":"True"
	I0203 11:29:25.120934  147863 pod_ready.go:82] duration metric: took 4.006566768s for pod "etcd-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.120949  147863 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.126175  147863 pod_ready.go:93] pod "kube-apiserver-test-preload-127579" in "kube-system" namespace has status "Ready":"True"
	I0203 11:29:25.126201  147863 pod_ready.go:82] duration metric: took 5.243009ms for pod "kube-apiserver-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.126213  147863 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.130865  147863 pod_ready.go:93] pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace has status "Ready":"True"
	I0203 11:29:25.130885  147863 pod_ready.go:82] duration metric: took 4.665782ms for pod "kube-controller-manager-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.130901  147863 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qs2mr" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.134503  147863 pod_ready.go:93] pod "kube-proxy-qs2mr" in "kube-system" namespace has status "Ready":"True"
	I0203 11:29:25.134519  147863 pod_ready.go:82] duration metric: took 3.612742ms for pod "kube-proxy-qs2mr" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.134527  147863 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.640648  147863 pod_ready.go:93] pod "kube-scheduler-test-preload-127579" in "kube-system" namespace has status "Ready":"True"
	I0203 11:29:25.640673  147863 pod_ready.go:82] duration metric: took 506.13953ms for pod "kube-scheduler-test-preload-127579" in "kube-system" namespace to be "Ready" ...
	I0203 11:29:25.640685  147863 pod_ready.go:39] duration metric: took 4.535105974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
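
The pod_ready loop above is minikube's own polling, but the same conditions can be expressed with kubectl wait against the node and the system-critical labels (a sketch using the CoreDNS label from the log; other control-plane components would use their component=... labels):

    kubectl --context test-preload-127579 wait --for=condition=Ready \
        node/test-preload-127579 --timeout=6m
    kubectl --context test-preload-127579 -n kube-system wait --for=condition=Ready \
        pod -l k8s-app=kube-dns --timeout=6m
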
	I0203 11:29:25.640704  147863 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:29:25.640756  147863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:29:25.656163  147863 api_server.go:72] duration metric: took 12.257308064s to wait for apiserver process to appear ...
	I0203 11:29:25.656195  147863 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:29:25.656218  147863 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0203 11:29:25.661275  147863 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I0203 11:29:25.662104  147863 api_server.go:141] control plane version: v1.24.4
	I0203 11:29:25.662131  147863 api_server.go:131] duration metric: took 5.927775ms to wait for apiserver health ...
	I0203 11:29:25.662142  147863 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:29:25.722762  147863 system_pods.go:59] 7 kube-system pods found
	I0203 11:29:25.722803  147863 system_pods.go:61] "coredns-6d4b75cb6d-dgkpw" [6107dc65-ef5a-41a9-8608-356d4256f652] Running
	I0203 11:29:25.722811  147863 system_pods.go:61] "etcd-test-preload-127579" [73fb2c98-844b-4b8e-b4be-b36f38803f83] Running
	I0203 11:29:25.722820  147863 system_pods.go:61] "kube-apiserver-test-preload-127579" [406b5efc-7804-47da-9220-8e9e36f90d5b] Running
	I0203 11:29:25.722826  147863 system_pods.go:61] "kube-controller-manager-test-preload-127579" [cc60a058-5403-4687-9955-b48a14534749] Running
	I0203 11:29:25.722830  147863 system_pods.go:61] "kube-proxy-qs2mr" [5618adf0-ccbb-4692-989b-9e6b2b09f35c] Running
	I0203 11:29:25.722839  147863 system_pods.go:61] "kube-scheduler-test-preload-127579" [b82199e4-cc0b-4328-9d28-0501757546e8] Running
	I0203 11:29:25.722844  147863 system_pods.go:61] "storage-provisioner" [f35d43c7-4f21-445b-a727-c573d777a0ca] Running
	I0203 11:29:25.722852  147863 system_pods.go:74] duration metric: took 60.70249ms to wait for pod list to return data ...
	I0203 11:29:25.722863  147863 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:29:25.918286  147863 default_sa.go:45] found service account: "default"
	I0203 11:29:25.918321  147863 default_sa.go:55] duration metric: took 195.450322ms for default service account to be created ...
	I0203 11:29:25.918334  147863 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:29:26.121554  147863 system_pods.go:86] 7 kube-system pods found
	I0203 11:29:26.121593  147863 system_pods.go:89] "coredns-6d4b75cb6d-dgkpw" [6107dc65-ef5a-41a9-8608-356d4256f652] Running
	I0203 11:29:26.121601  147863 system_pods.go:89] "etcd-test-preload-127579" [73fb2c98-844b-4b8e-b4be-b36f38803f83] Running
	I0203 11:29:26.121615  147863 system_pods.go:89] "kube-apiserver-test-preload-127579" [406b5efc-7804-47da-9220-8e9e36f90d5b] Running
	I0203 11:29:26.121621  147863 system_pods.go:89] "kube-controller-manager-test-preload-127579" [cc60a058-5403-4687-9955-b48a14534749] Running
	I0203 11:29:26.121626  147863 system_pods.go:89] "kube-proxy-qs2mr" [5618adf0-ccbb-4692-989b-9e6b2b09f35c] Running
	I0203 11:29:26.121632  147863 system_pods.go:89] "kube-scheduler-test-preload-127579" [b82199e4-cc0b-4328-9d28-0501757546e8] Running
	I0203 11:29:26.121636  147863 system_pods.go:89] "storage-provisioner" [f35d43c7-4f21-445b-a727-c573d777a0ca] Running
	I0203 11:29:26.121645  147863 system_pods.go:126] duration metric: took 203.303287ms to wait for k8s-apps to be running ...
	I0203 11:29:26.121655  147863 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:29:26.121713  147863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:29:26.147731  147863 system_svc.go:56] duration metric: took 26.065282ms WaitForService to wait for kubelet
	I0203 11:29:26.147778  147863 kubeadm.go:582] duration metric: took 12.748929061s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:29:26.147800  147863 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:29:26.318910  147863 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:29:26.318942  147863 node_conditions.go:123] node cpu capacity is 2
	I0203 11:29:26.318955  147863 node_conditions.go:105] duration metric: took 171.149139ms to run NodePressure ...
	I0203 11:29:26.318972  147863 start.go:241] waiting for startup goroutines ...
	I0203 11:29:26.318984  147863 start.go:246] waiting for cluster config update ...
	I0203 11:29:26.318998  147863 start.go:255] writing updated cluster config ...
	I0203 11:29:26.319335  147863 ssh_runner.go:195] Run: rm -f paused
	I0203 11:29:26.367281  147863 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0203 11:29:26.369196  147863 out.go:201] 
	W0203 11:29:26.370623  147863 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0203 11:29:26.371954  147863 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0203 11:29:26.373329  147863 out.go:177] * Done! kubectl is now configured to use "test-preload-127579" cluster and "default" namespace by default
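
Because the host kubectl (1.32.1) is eight minor versions ahead of the 1.24.4 cluster, the warning above points at minikube's bundled, version-matched client; for this profile that would be, for example:

    minikube -p test-preload-127579 kubectl -- get pods -A
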
	
	
	==> CRI-O <==
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.238812926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582167238790663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1e1b954-b1b5-49ff-828c-ce80fca5542d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.239337383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1e9e55e-4c31-48dc-a8c5-e11a0c6fb869 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.239396242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1e9e55e-4c31-48dc-a8c5-e11a0c6fb869 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.239590632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4447d0da1177ab353e5ad0c9c9439513a4c49ca5d2dbafce56f33a67a50bda55,PodSandboxId:5ea799f46ff603e0dd29b80eb87fb8279828dbb56c19650aa31ba3dc413ccfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738582159130817115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dgkpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6107dc65-ef5a-41a9-8608-356d4256f652,},Annotations:map[string]string{io.kubernetes.container.hash: 35a4202f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adfb6dabd7b0d876494d1949c5cdba35ed736565526c45b070491cff9f6a9110,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738582153179021394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f35d43c7-4f21-445b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4728c56f3c8064cb65dc9660aba4049014996160c06103c952e75a77a07b1ea,PodSandboxId:820231ed4d7ca225e2885647619d81842085061529e80ebfc0764458bbbccfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738582152077750653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs2mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
18adf0-ccbb-4692-989b-9e6b2b09f35c,},Annotations:map[string]string{io.kubernetes.container.hash: fde0337,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbb603ae564e245ab202b95250e509ef338a1a9ebfc821204e59178a106207a,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582152065680141,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35d43c7-4f21-44
5b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655fe5b9075ff8819b367dbe8d9252c80fc6d50c0be52a82777ba60c636f3ecb,PodSandboxId:e899a7ebf632a79c2a2ae1296e8b95588ca524c22681d830a51e413f0dd9a20b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738582146805545786,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 19f6d97968a3939eb62695a2e7065d5d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd34ab3cc62ba253ce45e76951ab143d454561c66eaeac2ef823e7e921e58fd,PodSandboxId:76e1d15ea5305030c70549fea7f101555343f684277e7c6b658480881f0b88e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738582146737413542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06d384358459da9d39f137944f437ea,},An
notations:map[string]string{io.kubernetes.container.hash: 57b77d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa9e78764e689bac437771b480da0e240bfb4480051b65110c8b755228ef19,PodSandboxId:30295793a749e2fa62b9fcb8c119ea6d0e4869a5a8489fa4418aebf717a65f48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738582146714838118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e65a598a9be69b0f82f242f9bedeff34,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7885c289c2eb0bf3a1fa14960b234e72e42d9572b6748cc4ec18f3dcf88282,PodSandboxId:afa160e221c5161532ee05c95dde743d85849f289de78f1eb00f7605fc38878d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738582146672378975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d288c94998e2893e095eb9fc45b09,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3bf0a39f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1e9e55e-4c31-48dc-a8c5-e11a0c6fb869 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.275167155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1a12894-8127-4c21-af5e-883021f8404b name=/runtime.v1.RuntimeService/Version
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.275245226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1a12894-8127-4c21-af5e-883021f8404b name=/runtime.v1.RuntimeService/Version
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.276416212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ebbb4f3-ae29-4875-9185-93196d9acd2b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.276949426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582167276894471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ebbb4f3-ae29-4875-9185-93196d9acd2b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.277599802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=143da407-c2fa-41ef-89fe-bb327f578308 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.277692803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=143da407-c2fa-41ef-89fe-bb327f578308 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.277877343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4447d0da1177ab353e5ad0c9c9439513a4c49ca5d2dbafce56f33a67a50bda55,PodSandboxId:5ea799f46ff603e0dd29b80eb87fb8279828dbb56c19650aa31ba3dc413ccfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738582159130817115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dgkpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6107dc65-ef5a-41a9-8608-356d4256f652,},Annotations:map[string]string{io.kubernetes.container.hash: 35a4202f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adfb6dabd7b0d876494d1949c5cdba35ed736565526c45b070491cff9f6a9110,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738582153179021394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f35d43c7-4f21-445b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4728c56f3c8064cb65dc9660aba4049014996160c06103c952e75a77a07b1ea,PodSandboxId:820231ed4d7ca225e2885647619d81842085061529e80ebfc0764458bbbccfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738582152077750653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs2mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
18adf0-ccbb-4692-989b-9e6b2b09f35c,},Annotations:map[string]string{io.kubernetes.container.hash: fde0337,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbb603ae564e245ab202b95250e509ef338a1a9ebfc821204e59178a106207a,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582152065680141,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35d43c7-4f21-44
5b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655fe5b9075ff8819b367dbe8d9252c80fc6d50c0be52a82777ba60c636f3ecb,PodSandboxId:e899a7ebf632a79c2a2ae1296e8b95588ca524c22681d830a51e413f0dd9a20b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738582146805545786,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 19f6d97968a3939eb62695a2e7065d5d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd34ab3cc62ba253ce45e76951ab143d454561c66eaeac2ef823e7e921e58fd,PodSandboxId:76e1d15ea5305030c70549fea7f101555343f684277e7c6b658480881f0b88e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738582146737413542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06d384358459da9d39f137944f437ea,},An
notations:map[string]string{io.kubernetes.container.hash: 57b77d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa9e78764e689bac437771b480da0e240bfb4480051b65110c8b755228ef19,PodSandboxId:30295793a749e2fa62b9fcb8c119ea6d0e4869a5a8489fa4418aebf717a65f48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738582146714838118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e65a598a9be69b0f82f242f9bedeff34,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7885c289c2eb0bf3a1fa14960b234e72e42d9572b6748cc4ec18f3dcf88282,PodSandboxId:afa160e221c5161532ee05c95dde743d85849f289de78f1eb00f7605fc38878d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738582146672378975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d288c94998e2893e095eb9fc45b09,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3bf0a39f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=143da407-c2fa-41ef-89fe-bb327f578308 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.313815220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce0814c9-9e02-4535-97a7-28ab8d09c0da name=/runtime.v1.RuntimeService/Version
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.313885200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce0814c9-9e02-4535-97a7-28ab8d09c0da name=/runtime.v1.RuntimeService/Version
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.314788154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5550bdce-ef93-4ad2-a4c8-865a463d249e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.315249136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582167315228300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5550bdce-ef93-4ad2-a4c8-865a463d249e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.315791274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=427863bf-7c1a-416e-ba79-aeda004e8ca3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.315839995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=427863bf-7c1a-416e-ba79-aeda004e8ca3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.316068312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4447d0da1177ab353e5ad0c9c9439513a4c49ca5d2dbafce56f33a67a50bda55,PodSandboxId:5ea799f46ff603e0dd29b80eb87fb8279828dbb56c19650aa31ba3dc413ccfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738582159130817115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dgkpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6107dc65-ef5a-41a9-8608-356d4256f652,},Annotations:map[string]string{io.kubernetes.container.hash: 35a4202f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adfb6dabd7b0d876494d1949c5cdba35ed736565526c45b070491cff9f6a9110,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738582153179021394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f35d43c7-4f21-445b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4728c56f3c8064cb65dc9660aba4049014996160c06103c952e75a77a07b1ea,PodSandboxId:820231ed4d7ca225e2885647619d81842085061529e80ebfc0764458bbbccfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738582152077750653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs2mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
18adf0-ccbb-4692-989b-9e6b2b09f35c,},Annotations:map[string]string{io.kubernetes.container.hash: fde0337,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbb603ae564e245ab202b95250e509ef338a1a9ebfc821204e59178a106207a,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582152065680141,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35d43c7-4f21-44
5b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655fe5b9075ff8819b367dbe8d9252c80fc6d50c0be52a82777ba60c636f3ecb,PodSandboxId:e899a7ebf632a79c2a2ae1296e8b95588ca524c22681d830a51e413f0dd9a20b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738582146805545786,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 19f6d97968a3939eb62695a2e7065d5d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd34ab3cc62ba253ce45e76951ab143d454561c66eaeac2ef823e7e921e58fd,PodSandboxId:76e1d15ea5305030c70549fea7f101555343f684277e7c6b658480881f0b88e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738582146737413542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06d384358459da9d39f137944f437ea,},An
notations:map[string]string{io.kubernetes.container.hash: 57b77d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa9e78764e689bac437771b480da0e240bfb4480051b65110c8b755228ef19,PodSandboxId:30295793a749e2fa62b9fcb8c119ea6d0e4869a5a8489fa4418aebf717a65f48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738582146714838118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e65a598a9be69b0f82f242f9bedeff34,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7885c289c2eb0bf3a1fa14960b234e72e42d9572b6748cc4ec18f3dcf88282,PodSandboxId:afa160e221c5161532ee05c95dde743d85849f289de78f1eb00f7605fc38878d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738582146672378975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d288c94998e2893e095eb9fc45b09,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3bf0a39f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=427863bf-7c1a-416e-ba79-aeda004e8ca3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.347412201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f75c57c3-2245-4322-a35f-365894306cf9 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.347481460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f75c57c3-2245-4322-a35f-365894306cf9 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.348899382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7822bfe-8d12-40cd-9ab3-411ccf274594 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.349359635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582167349333691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7822bfe-8d12-40cd-9ab3-411ccf274594 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.349941599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43f730d8-d59f-4404-8e3a-d127d71db08c name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.349990301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43f730d8-d59f-4404-8e3a-d127d71db08c name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:29:27 test-preload-127579 crio[667]: time="2025-02-03 11:29:27.350488393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4447d0da1177ab353e5ad0c9c9439513a4c49ca5d2dbafce56f33a67a50bda55,PodSandboxId:5ea799f46ff603e0dd29b80eb87fb8279828dbb56c19650aa31ba3dc413ccfc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738582159130817115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dgkpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6107dc65-ef5a-41a9-8608-356d4256f652,},Annotations:map[string]string{io.kubernetes.container.hash: 35a4202f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adfb6dabd7b0d876494d1949c5cdba35ed736565526c45b070491cff9f6a9110,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738582153179021394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f35d43c7-4f21-445b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4728c56f3c8064cb65dc9660aba4049014996160c06103c952e75a77a07b1ea,PodSandboxId:820231ed4d7ca225e2885647619d81842085061529e80ebfc0764458bbbccfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738582152077750653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs2mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
18adf0-ccbb-4692-989b-9e6b2b09f35c,},Annotations:map[string]string{io.kubernetes.container.hash: fde0337,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbb603ae564e245ab202b95250e509ef338a1a9ebfc821204e59178a106207a,PodSandboxId:e48cdd84d5bc31221f889d8ce2fc507af6216133f98cc99b2470c16f9b7e7c65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582152065680141,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35d43c7-4f21-44
5b-a727-c573d777a0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 9cf41d03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655fe5b9075ff8819b367dbe8d9252c80fc6d50c0be52a82777ba60c636f3ecb,PodSandboxId:e899a7ebf632a79c2a2ae1296e8b95588ca524c22681d830a51e413f0dd9a20b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738582146805545786,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 19f6d97968a3939eb62695a2e7065d5d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd34ab3cc62ba253ce45e76951ab143d454561c66eaeac2ef823e7e921e58fd,PodSandboxId:76e1d15ea5305030c70549fea7f101555343f684277e7c6b658480881f0b88e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738582146737413542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06d384358459da9d39f137944f437ea,},An
notations:map[string]string{io.kubernetes.container.hash: 57b77d68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa9e78764e689bac437771b480da0e240bfb4480051b65110c8b755228ef19,PodSandboxId:30295793a749e2fa62b9fcb8c119ea6d0e4869a5a8489fa4418aebf717a65f48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738582146714838118,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e65a598a9be69b0f82f242f9bedeff34,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7885c289c2eb0bf3a1fa14960b234e72e42d9572b6748cc4ec18f3dcf88282,PodSandboxId:afa160e221c5161532ee05c95dde743d85849f289de78f1eb00f7605fc38878d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738582146672378975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-127579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d288c94998e2893e095eb9fc45b09,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3bf0a39f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43f730d8-d59f-4404-8e3a-d127d71db08c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4447d0da1177a       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   5ea799f46ff60       coredns-6d4b75cb6d-dgkpw
	adfb6dabd7b0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   e48cdd84d5bc3       storage-provisioner
	a4728c56f3c80       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   820231ed4d7ca       kube-proxy-qs2mr
	3bbb603ae564e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       1                   e48cdd84d5bc3       storage-provisioner
	655fe5b9075ff       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   e899a7ebf632a       kube-controller-manager-test-preload-127579
	8bd34ab3cc62b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   76e1d15ea5305       etcd-test-preload-127579
	dcfa9e78764e6       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   30295793a749e       kube-scheduler-test-preload-127579
	2c7885c289c2e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   afa160e221c51       kube-apiserver-test-preload-127579
	
	
	==> coredns [4447d0da1177ab353e5ad0c9c9439513a4c49ca5d2dbafce56f33a67a50bda55] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44868 - 36426 "HINFO IN 3934825351509992131.8013955060288086867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016794951s
	
	
	==> describe nodes <==
	Name:               test-preload-127579
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-127579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=test-preload-127579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T11_27_44_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:27:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-127579
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:29:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:29:20 +0000   Mon, 03 Feb 2025 11:27:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:29:20 +0000   Mon, 03 Feb 2025 11:27:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:29:20 +0000   Mon, 03 Feb 2025 11:27:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:29:20 +0000   Mon, 03 Feb 2025 11:29:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    test-preload-127579
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3b0586f5a5f4a0db02cb8de297383fc
	  System UUID:                b3b0586f-5a5f-4a0d-b02c-b8de297383fc
	  Boot ID:                    dc38fcf5-3a6a-49b9-b3a0-07c33e5909ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dgkpw                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-test-preload-127579                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-127579             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-test-preload-127579    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-qs2mr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-127579             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  111s (x4 over 111s)  kubelet          Node test-preload-127579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x4 over 111s)  kubelet          Node test-preload-127579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x4 over 111s)  kubelet          Node test-preload-127579 status is now: NodeHasSufficientPID
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node test-preload-127579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node test-preload-127579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node test-preload-127579 status is now: NodeHasSufficientPID
	  Normal  NodeReady                93s                  kubelet          Node test-preload-127579 status is now: NodeReady
	  Normal  RegisteredNode           91s                  node-controller  Node test-preload-127579 event: Registered Node test-preload-127579 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-127579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-127579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-127579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-127579 event: Registered Node test-preload-127579 in Controller
	
	
	==> dmesg <==
	[Feb 3 11:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056302] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047059] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.849192] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.009729] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.552892] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.635954] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.060214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060935] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.159721] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.149261] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.262601] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Feb 3 11:29] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.056277] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.146967] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +4.555446] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.057108] systemd-fstab-generator[1819]: Ignoring "noauto" option for root device
	[  +5.509627] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [8bd34ab3cc62ba253ce45e76951ab143d454561c66eaeac2ef823e7e921e58fd] <==
	{"level":"info","ts":"2025-02-03T11:29:07.023Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4cff10f3f970b356","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-03T11:29:07.026Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-03T11:29:07.026Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4cff10f3f970b356","initial-advertise-peer-urls":["https://192.168.39.62:2380"],"listen-peer-urls":["https://192.168.39.62:2380"],"advertise-client-urls":["https://192.168.39.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-03T11:29:07.026Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-03T11:29:07.026Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-03T11:29:07.026Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2025-02-03T11:29:07.026Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2025-02-03T11:29:07.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 switched to configuration voters=(5548171905991750486)"}
	{"level":"info","ts":"2025-02-03T11:29:07.027Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cebe0b560c7f0a8","local-member-id":"4cff10f3f970b356","added-peer-id":"4cff10f3f970b356","added-peer-peer-urls":["https://192.168.39.62:2380"]}
	{"level":"info","ts":"2025-02-03T11:29:07.027Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cebe0b560c7f0a8","local-member-id":"4cff10f3f970b356","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T11:29:07.027Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgPreVoteResp from 4cff10f3f970b356 at term 2"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became candidate at term 3"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgVoteResp from 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became leader at term 3"}
	{"level":"info","ts":"2025-02-03T11:29:08.401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4cff10f3f970b356 elected leader 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2025-02-03T11:29:08.406Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4cff10f3f970b356","local-member-attributes":"{Name:test-preload-127579 ClientURLs:[https://192.168.39.62:2379]}","request-path":"/0/members/4cff10f3f970b356/attributes","cluster-id":"cebe0b560c7f0a8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-03T11:29:08.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T11:29:08.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T11:29:08.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.62:2379"}
	{"level":"info","ts":"2025-02-03T11:29:08.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-03T11:29:08.410Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-03T11:29:08.410Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:29:27 up 0 min,  0 users,  load average: 0.69, 0.21, 0.07
	Linux test-preload-127579 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2c7885c289c2eb0bf3a1fa14960b234e72e42d9572b6748cc4ec18f3dcf88282] <==
	I0203 11:29:10.773334       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 11:29:10.773365       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0203 11:29:10.774845       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 11:29:10.793130       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 11:29:10.800575       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0203 11:29:10.800601       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0203 11:29:10.800684       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0203 11:29:10.807920       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0203 11:29:10.825871       1 cache.go:39] Caches are synced for autoregister controller
	I0203 11:29:10.826077       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0203 11:29:10.826521       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0203 11:29:10.826564       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0203 11:29:10.826858       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0203 11:29:10.829590       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 11:29:10.882342       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 11:29:11.405581       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0203 11:29:11.730243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 11:29:12.370112       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0203 11:29:12.383728       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0203 11:29:12.442367       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0203 11:29:12.464728       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 11:29:12.470739       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 11:29:12.487160       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0203 11:29:23.501800       1 controller.go:611] quota admission added evaluator for: endpoints
	I0203 11:29:23.504943       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [655fe5b9075ff8819b367dbe8d9252c80fc6d50c0be52a82777ba60c636f3ecb] <==
	I0203 11:29:23.505837       1 shared_informer.go:262] Caches are synced for PVC protection
	I0203 11:29:23.507586       1 shared_informer.go:262] Caches are synced for attach detach
	I0203 11:29:23.509425       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 11:29:23.509908       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 11:29:23.510237       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 11:29:23.510289       1 shared_informer.go:262] Caches are synced for job
	I0203 11:29:23.510338       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 11:29:23.512676       1 shared_informer.go:262] Caches are synced for PV protection
	I0203 11:29:23.512725       1 shared_informer.go:262] Caches are synced for deployment
	I0203 11:29:23.515685       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0203 11:29:23.521022       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0203 11:29:23.521542       1 shared_informer.go:262] Caches are synced for HPA
	I0203 11:29:23.590850       1 shared_informer.go:262] Caches are synced for service account
	I0203 11:29:23.608593       1 shared_informer.go:262] Caches are synced for stateful set
	I0203 11:29:23.612396       1 shared_informer.go:262] Caches are synced for namespace
	I0203 11:29:23.614818       1 shared_informer.go:262] Caches are synced for daemon sets
	I0203 11:29:23.707581       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0203 11:29:23.718277       1 shared_informer.go:262] Caches are synced for cronjob
	I0203 11:29:23.718383       1 shared_informer.go:262] Caches are synced for disruption
	I0203 11:29:23.718419       1 disruption.go:371] Sending events to api server.
	I0203 11:29:23.719704       1 shared_informer.go:262] Caches are synced for resource quota
	I0203 11:29:23.755532       1 shared_informer.go:262] Caches are synced for resource quota
	I0203 11:29:24.169247       1 shared_informer.go:262] Caches are synced for garbage collector
	I0203 11:29:24.172741       1 shared_informer.go:262] Caches are synced for garbage collector
	I0203 11:29:24.172772       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [a4728c56f3c8064cb65dc9660aba4049014996160c06103c952e75a77a07b1ea] <==
	I0203 11:29:12.398090       1 node.go:163] Successfully retrieved node IP: 192.168.39.62
	I0203 11:29:12.398200       1 server_others.go:138] "Detected node IP" address="192.168.39.62"
	I0203 11:29:12.398269       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0203 11:29:12.472172       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0203 11:29:12.472275       1 server_others.go:206] "Using iptables Proxier"
	I0203 11:29:12.472893       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0203 11:29:12.475714       1 server.go:661] "Version info" version="v1.24.4"
	I0203 11:29:12.475764       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:29:12.479330       1 config.go:317] "Starting service config controller"
	I0203 11:29:12.479658       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0203 11:29:12.479716       1 config.go:226] "Starting endpoint slice config controller"
	I0203 11:29:12.479736       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0203 11:29:12.480924       1 config.go:444] "Starting node config controller"
	I0203 11:29:12.480949       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0203 11:29:12.580146       1 shared_informer.go:262] Caches are synced for service config
	I0203 11:29:12.580269       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0203 11:29:12.581062       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [dcfa9e78764e689bac437771b480da0e240bfb4480051b65110c8b755228ef19] <==
	I0203 11:29:07.268465       1 serving.go:348] Generated self-signed cert in-memory
	W0203 11:29:10.783685       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0203 11:29:10.783754       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 11:29:10.783769       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0203 11:29:10.783778       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 11:29:10.814496       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0203 11:29:10.814549       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:29:10.817319       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0203 11:29:10.817503       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 11:29:10.817540       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 11:29:10.821253       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0203 11:29:10.917846       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.023542    1128 topology_manager.go:200] "Topology Admit Handler"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.023574    1128 topology_manager.go:200] "Topology Admit Handler"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: E0203 11:29:11.025947    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dgkpw" podUID=6107dc65-ef5a-41a9-8608-356d4256f652
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.088180    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume\") pod \"coredns-6d4b75cb6d-dgkpw\" (UID: \"6107dc65-ef5a-41a9-8608-356d4256f652\") " pod="kube-system/coredns-6d4b75cb6d-dgkpw"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.088968    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrcd\" (UniqueName: \"kubernetes.io/projected/6107dc65-ef5a-41a9-8608-356d4256f652-kube-api-access-gfrcd\") pod \"coredns-6d4b75cb6d-dgkpw\" (UID: \"6107dc65-ef5a-41a9-8608-356d4256f652\") " pod="kube-system/coredns-6d4b75cb6d-dgkpw"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.089145    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f35d43c7-4f21-445b-a727-c573d777a0ca-tmp\") pod \"storage-provisioner\" (UID: \"f35d43c7-4f21-445b-a727-c573d777a0ca\") " pod="kube-system/storage-provisioner"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.089284    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz4xs\" (UniqueName: \"kubernetes.io/projected/f35d43c7-4f21-445b-a727-c573d777a0ca-kube-api-access-qz4xs\") pod \"storage-provisioner\" (UID: \"f35d43c7-4f21-445b-a727-c573d777a0ca\") " pod="kube-system/storage-provisioner"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.089402    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5618adf0-ccbb-4692-989b-9e6b2b09f35c-kube-proxy\") pod \"kube-proxy-qs2mr\" (UID: \"5618adf0-ccbb-4692-989b-9e6b2b09f35c\") " pod="kube-system/kube-proxy-qs2mr"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.089519    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5618adf0-ccbb-4692-989b-9e6b2b09f35c-lib-modules\") pod \"kube-proxy-qs2mr\" (UID: \"5618adf0-ccbb-4692-989b-9e6b2b09f35c\") " pod="kube-system/kube-proxy-qs2mr"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: E0203 11:29:11.088827    1128 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.089823    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgn47\" (UniqueName: \"kubernetes.io/projected/5618adf0-ccbb-4692-989b-9e6b2b09f35c-kube-api-access-sgn47\") pod \"kube-proxy-qs2mr\" (UID: \"5618adf0-ccbb-4692-989b-9e6b2b09f35c\") " pod="kube-system/kube-proxy-qs2mr"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.089951    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5618adf0-ccbb-4692-989b-9e6b2b09f35c-xtables-lock\") pod \"kube-proxy-qs2mr\" (UID: \"5618adf0-ccbb-4692-989b-9e6b2b09f35c\") " pod="kube-system/kube-proxy-qs2mr"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: I0203 11:29:11.090059    1128 reconciler.go:159] "Reconciler: start to sync state"
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: E0203 11:29:11.193177    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: E0203 11:29:11.193277    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume podName:6107dc65-ef5a-41a9-8608-356d4256f652 nodeName:}" failed. No retries permitted until 2025-02-03 11:29:11.693242701 +0000 UTC m=+5.788977684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume") pod "coredns-6d4b75cb6d-dgkpw" (UID: "6107dc65-ef5a-41a9-8608-356d4256f652") : object "kube-system"/"coredns" not registered
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: E0203 11:29:11.699087    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 03 11:29:11 test-preload-127579 kubelet[1128]: E0203 11:29:11.699186    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume podName:6107dc65-ef5a-41a9-8608-356d4256f652 nodeName:}" failed. No retries permitted until 2025-02-03 11:29:12.699171114 +0000 UTC m=+6.794906079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume") pod "coredns-6d4b75cb6d-dgkpw" (UID: "6107dc65-ef5a-41a9-8608-356d4256f652") : object "kube-system"/"coredns" not registered
	Feb 03 11:29:12 test-preload-127579 kubelet[1128]: I0203 11:29:12.125383    1128 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a9dd492e-4aec-4a87-9300-8d9ef9d08bef path="/var/lib/kubelet/pods/a9dd492e-4aec-4a87-9300-8d9ef9d08bef/volumes"
	Feb 03 11:29:12 test-preload-127579 kubelet[1128]: E0203 11:29:12.706169    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 03 11:29:12 test-preload-127579 kubelet[1128]: E0203 11:29:12.706281    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume podName:6107dc65-ef5a-41a9-8608-356d4256f652 nodeName:}" failed. No retries permitted until 2025-02-03 11:29:14.706264578 +0000 UTC m=+8.801999544 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume") pod "coredns-6d4b75cb6d-dgkpw" (UID: "6107dc65-ef5a-41a9-8608-356d4256f652") : object "kube-system"/"coredns" not registered
	Feb 03 11:29:13 test-preload-127579 kubelet[1128]: E0203 11:29:13.109689    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dgkpw" podUID=6107dc65-ef5a-41a9-8608-356d4256f652
	Feb 03 11:29:13 test-preload-127579 kubelet[1128]: I0203 11:29:13.163691    1128 scope.go:110] "RemoveContainer" containerID="3bbb603ae564e245ab202b95250e509ef338a1a9ebfc821204e59178a106207a"
	Feb 03 11:29:14 test-preload-127579 kubelet[1128]: E0203 11:29:14.729375    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 03 11:29:14 test-preload-127579 kubelet[1128]: E0203 11:29:14.730295    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume podName:6107dc65-ef5a-41a9-8608-356d4256f652 nodeName:}" failed. No retries permitted until 2025-02-03 11:29:18.730266978 +0000 UTC m=+12.826001947 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6107dc65-ef5a-41a9-8608-356d4256f652-config-volume") pod "coredns-6d4b75cb6d-dgkpw" (UID: "6107dc65-ef5a-41a9-8608-356d4256f652") : object "kube-system"/"coredns" not registered
	Feb 03 11:29:15 test-preload-127579 kubelet[1128]: E0203 11:29:15.109664    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dgkpw" podUID=6107dc65-ef5a-41a9-8608-356d4256f652
	
	
	==> storage-provisioner [3bbb603ae564e245ab202b95250e509ef338a1a9ebfc821204e59178a106207a] <==
	I0203 11:29:12.199317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0203 11:29:12.206216       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [adfb6dabd7b0d876494d1949c5cdba35ed736565526c45b070491cff9f6a9110] <==
	I0203 11:29:13.258940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 11:29:13.270781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 11:29:13.270954       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-127579 -n test-preload-127579
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-127579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-127579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-127579
--- FAIL: TestPreload (176.53s)
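The kubelet entries above that repeat "No CNI configuration file in /etc/cni/net.d/" are the usual symptom of the node restarting before a CNI config has been (re)written; minikube normally generates that file itself during start, so the following is only an illustrative sketch of the kind of bridge conflist the kubelet is polling for. The file name, subnet, and plugin options are assumptions chosen for the example, not values taken from this run.

	# Hypothetical sketch: write a minimal bridge CNI config inside the node
	# (e.g. after `minikube ssh -p test-preload-127579`); all values are illustrative.
	sudo tee /etc/cni/net.d/100-bridge.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
	# Once a CNI config is present, the kubelet's NetworkReady=false condition clears
	# and pod sandboxes (e.g. for coredns) can be created again.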

                                                
                                    
x
+
TestKubernetesUpgrade (402.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.700870816s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-700839] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-700839" primary control-plane node in "kubernetes-upgrade-700839" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:35:06.316550  154759 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:35:06.316726  154759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:35:06.316738  154759 out.go:358] Setting ErrFile to fd 2...
	I0203 11:35:06.316746  154759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:35:06.317194  154759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:35:06.318071  154759 out.go:352] Setting JSON to false
	I0203 11:35:06.319591  154759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8248,"bootTime":1738574258,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:35:06.319706  154759 start.go:139] virtualization: kvm guest
	I0203 11:35:06.322955  154759 out.go:177] * [kubernetes-upgrade-700839] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:35:06.324931  154759 notify.go:220] Checking for updates...
	I0203 11:35:06.324948  154759 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:35:06.326779  154759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:35:06.328970  154759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:35:06.331137  154759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:35:06.334926  154759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:35:06.336920  154759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:35:06.339401  154759 config.go:182] Loaded profile config "NoKubernetes-178849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0203 11:35:06.339546  154759 config.go:182] Loaded profile config "cert-expiration-149645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:35:06.339688  154759 config.go:182] Loaded profile config "running-upgrade-191474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0203 11:35:06.339797  154759 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:35:06.389301  154759 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 11:35:06.391357  154759 start.go:297] selected driver: kvm2
	I0203 11:35:06.391394  154759 start.go:901] validating driver "kvm2" against <nil>
	I0203 11:35:06.391410  154759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:35:06.392820  154759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:35:06.392974  154759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:35:06.417391  154759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:35:06.417467  154759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:35:06.417853  154759 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 11:35:06.417901  154759 cni.go:84] Creating CNI manager for ""
	I0203 11:35:06.417963  154759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:35:06.417977  154759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 11:35:06.418074  154759 start.go:340] cluster config:
	{Name:kubernetes-upgrade-700839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-700839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:35:06.418243  154759 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:35:06.420625  154759 out.go:177] * Starting "kubernetes-upgrade-700839" primary control-plane node in "kubernetes-upgrade-700839" cluster
	I0203 11:35:06.422325  154759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:35:06.422380  154759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0203 11:35:06.422392  154759 cache.go:56] Caching tarball of preloaded images
	I0203 11:35:06.422555  154759 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:35:06.422594  154759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0203 11:35:06.422785  154759 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/config.json ...
	I0203 11:35:06.422815  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/config.json: {Name:mk3e987e1ca5d49b9895fb746b7b699dfeb2a155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:06.422974  154759 start.go:360] acquireMachinesLock for kubernetes-upgrade-700839: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:35:06.423003  154759 start.go:364] duration metric: took 18.069µs to acquireMachinesLock for "kubernetes-upgrade-700839"
	I0203 11:35:06.423018  154759 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-700839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-700839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:35:06.423085  154759 start.go:125] createHost starting for "" (driver="kvm2")
	I0203 11:35:06.425054  154759 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:35:06.425289  154759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:35:06.425355  154759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:35:06.443049  154759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0203 11:35:06.443550  154759 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:35:06.444189  154759 main.go:141] libmachine: Using API Version  1
	I0203 11:35:06.444214  154759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:35:06.444636  154759 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:35:06.444848  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetMachineName
	I0203 11:35:06.445053  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:06.445248  154759 start.go:159] libmachine.API.Create for "kubernetes-upgrade-700839" (driver="kvm2")
	I0203 11:35:06.445294  154759 client.go:168] LocalClient.Create starting
	I0203 11:35:06.445331  154759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem
	I0203 11:35:06.445370  154759 main.go:141] libmachine: Decoding PEM data...
	I0203 11:35:06.445389  154759 main.go:141] libmachine: Parsing certificate...
	I0203 11:35:06.445465  154759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem
	I0203 11:35:06.445492  154759 main.go:141] libmachine: Decoding PEM data...
	I0203 11:35:06.445508  154759 main.go:141] libmachine: Parsing certificate...
	I0203 11:35:06.445533  154759 main.go:141] libmachine: Running pre-create checks...
	I0203 11:35:06.445547  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .PreCreateCheck
	I0203 11:35:06.445897  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetConfigRaw
	I0203 11:35:06.446488  154759 main.go:141] libmachine: Creating machine...
	I0203 11:35:06.446511  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .Create
	I0203 11:35:06.446686  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) creating KVM machine...
	I0203 11:35:06.446710  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) creating network...
	I0203 11:35:06.448164  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found existing default KVM network
	I0203 11:35:06.449758  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:06.449571  154810 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:55:1b:bf} reservation:<nil>}
	I0203 11:35:06.451283  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:06.451194  154810 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000280b80}
	I0203 11:35:06.451320  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | created network xml: 
	I0203 11:35:06.451335  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | <network>
	I0203 11:35:06.451352  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |   <name>mk-kubernetes-upgrade-700839</name>
	I0203 11:35:06.451372  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |   <dns enable='no'/>
	I0203 11:35:06.451381  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |   
	I0203 11:35:06.451390  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0203 11:35:06.451414  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |     <dhcp>
	I0203 11:35:06.451427  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0203 11:35:06.451434  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |     </dhcp>
	I0203 11:35:06.451440  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |   </ip>
	I0203 11:35:06.451451  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG |   
	I0203 11:35:06.451468  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | </network>
	I0203 11:35:06.451478  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | 
	I0203 11:35:06.457948  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | trying to create private KVM network mk-kubernetes-upgrade-700839 192.168.50.0/24...
	I0203 11:35:06.582665  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | private KVM network mk-kubernetes-upgrade-700839 192.168.50.0/24 created
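	# Illustrative aside, not part of the test output: with the libvirt client tools on the host,
	# the private network minikube just created can be inspected directly. The network name and the
	# qemu:///system URI come from the log above; the commands are standard virsh usage.
	virsh -c qemu:///system net-list --all                              # mk-kubernetes-upgrade-700839 should show as active
	virsh -c qemu:///system net-dumpxml mk-kubernetes-upgrade-700839    # prints the <network> XML logged above
	virsh -c qemu:///system net-dhcp-leases mk-kubernetes-upgrade-700839  # DHCP leases handed out in 192.168.50.0/24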
	I0203 11:35:06.582831  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting up store path in /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839 ...
	I0203 11:35:06.582866  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:06.582792  154810 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:35:06.582882  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) building disk image from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0203 11:35:06.584450  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Downloading /home/jenkins/minikube-integration/20354-109432/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:35:06.934421  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:06.934253  154810 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa...
	I0203 11:35:07.268176  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:07.268015  154810 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/kubernetes-upgrade-700839.rawdisk...
	I0203 11:35:07.268217  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Writing magic tar header
	I0203 11:35:07.268367  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Writing SSH key tar header
	I0203 11:35:07.268787  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:07.268722  154810 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839 ...
	I0203 11:35:07.268917  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839
	I0203 11:35:07.268947  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines
	I0203 11:35:07.268962  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839 (perms=drwx------)
	I0203 11:35:07.268990  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:35:07.269023  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines (perms=drwxr-xr-x)
	I0203 11:35:07.269046  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432
	I0203 11:35:07.269064  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0203 11:35:07.269081  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home/jenkins
	I0203 11:35:07.269092  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube (perms=drwxr-xr-x)
	I0203 11:35:07.269101  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | checking permissions on dir: /home
	I0203 11:35:07.269119  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting executable bit set on /home/jenkins/minikube-integration/20354-109432 (perms=drwxrwxr-x)
	I0203 11:35:07.269127  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | skipping /home - not owner
	I0203 11:35:07.269152  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0203 11:35:07.269171  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0203 11:35:07.269183  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) creating domain...
	I0203 11:35:07.270394  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) define libvirt domain using xml: 
	I0203 11:35:07.270421  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) <domain type='kvm'>
	I0203 11:35:07.270432  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <name>kubernetes-upgrade-700839</name>
	I0203 11:35:07.270441  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <memory unit='MiB'>2200</memory>
	I0203 11:35:07.270449  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <vcpu>2</vcpu>
	I0203 11:35:07.270456  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <features>
	I0203 11:35:07.270463  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <acpi/>
	I0203 11:35:07.270470  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <apic/>
	I0203 11:35:07.270478  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <pae/>
	I0203 11:35:07.270485  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     
	I0203 11:35:07.270492  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   </features>
	I0203 11:35:07.270498  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <cpu mode='host-passthrough'>
	I0203 11:35:07.270506  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   
	I0203 11:35:07.270512  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   </cpu>
	I0203 11:35:07.270520  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <os>
	I0203 11:35:07.270527  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <type>hvm</type>
	I0203 11:35:07.270535  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <boot dev='cdrom'/>
	I0203 11:35:07.270542  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <boot dev='hd'/>
	I0203 11:35:07.270551  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <bootmenu enable='no'/>
	I0203 11:35:07.270557  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   </os>
	I0203 11:35:07.270566  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   <devices>
	I0203 11:35:07.270573  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <disk type='file' device='cdrom'>
	I0203 11:35:07.270608  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/boot2docker.iso'/>
	I0203 11:35:07.270631  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <target dev='hdc' bus='scsi'/>
	I0203 11:35:07.270644  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <readonly/>
	I0203 11:35:07.270662  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </disk>
	I0203 11:35:07.270675  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <disk type='file' device='disk'>
	I0203 11:35:07.270686  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0203 11:35:07.270704  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/kubernetes-upgrade-700839.rawdisk'/>
	I0203 11:35:07.270714  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <target dev='hda' bus='virtio'/>
	I0203 11:35:07.270723  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </disk>
	I0203 11:35:07.270735  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <interface type='network'>
	I0203 11:35:07.270746  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <source network='mk-kubernetes-upgrade-700839'/>
	I0203 11:35:07.270757  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <model type='virtio'/>
	I0203 11:35:07.270777  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </interface>
	I0203 11:35:07.270789  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <interface type='network'>
	I0203 11:35:07.270802  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <source network='default'/>
	I0203 11:35:07.270810  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <model type='virtio'/>
	I0203 11:35:07.270822  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </interface>
	I0203 11:35:07.270832  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <serial type='pty'>
	I0203 11:35:07.270843  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <target port='0'/>
	I0203 11:35:07.270853  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </serial>
	I0203 11:35:07.270862  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <console type='pty'>
	I0203 11:35:07.270876  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <target type='serial' port='0'/>
	I0203 11:35:07.270888  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </console>
	I0203 11:35:07.270899  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     <rng model='virtio'>
	I0203 11:35:07.270909  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)       <backend model='random'>/dev/random</backend>
	I0203 11:35:07.270919  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     </rng>
	I0203 11:35:07.270927  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     
	I0203 11:35:07.270936  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)     
	I0203 11:35:07.270947  154759 main.go:141] libmachine: (kubernetes-upgrade-700839)   </devices>
	I0203 11:35:07.270957  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) </domain>
	I0203 11:35:07.270968  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) 
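	# Illustrative aside, not part of the test output: once the domain XML above is defined and started,
	# standard virsh commands show what libvirt actually created. The domain name is taken from the log;
	# the flags are generic virsh options.
	virsh -c qemu:///system dumpxml kubernetes-upgrade-700839                  # full domain definition, including both <interface> entries
	virsh -c qemu:///system domstate kubernetes-upgrade-700839                 # VM state while minikube waits for an IP
	virsh -c qemu:///system domifaddr kubernetes-upgrade-700839 --source lease # the DHCP lease the "waiting for IP" loop below polls for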
	I0203 11:35:07.275624  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:d9:60:92 in network default
	I0203 11:35:07.276205  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) starting domain...
	I0203 11:35:07.276262  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) ensuring networks are active...
	I0203 11:35:07.276281  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:07.277077  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Ensuring network default is active
	I0203 11:35:07.277429  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Ensuring network mk-kubernetes-upgrade-700839 is active
	I0203 11:35:07.278139  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) getting domain XML...
	I0203 11:35:07.279181  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) creating domain...
	I0203 11:35:08.827435  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) waiting for IP...
	I0203 11:35:08.828598  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:08.829217  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:08.829293  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:08.829210  154810 retry.go:31] will retry after 307.964186ms: waiting for domain to come up
	I0203 11:35:09.139303  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:09.140032  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:09.140081  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:09.140015  154810 retry.go:31] will retry after 327.597204ms: waiting for domain to come up
	I0203 11:35:09.469624  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:09.470145  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:09.470176  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:09.470102  154810 retry.go:31] will retry after 430.368741ms: waiting for domain to come up
	I0203 11:35:09.902717  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:09.903318  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:09.903344  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:09.903285  154810 retry.go:31] will retry after 534.400807ms: waiting for domain to come up
	I0203 11:35:10.439177  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:10.439707  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:10.439729  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:10.439652  154810 retry.go:31] will retry after 733.987382ms: waiting for domain to come up
	I0203 11:35:11.176158  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:11.176877  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:11.176949  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:11.176843  154810 retry.go:31] will retry after 633.638114ms: waiting for domain to come up
	I0203 11:35:11.812465  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:11.813087  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:11.813151  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:11.813068  154810 retry.go:31] will retry after 850.332138ms: waiting for domain to come up
	I0203 11:35:12.665463  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:12.665947  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:12.665978  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:12.665927  154810 retry.go:31] will retry after 1.436857891s: waiting for domain to come up
	I0203 11:35:14.104696  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:14.105249  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:14.105272  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:14.105216  154810 retry.go:31] will retry after 1.172210228s: waiting for domain to come up
	I0203 11:35:15.279276  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:15.279770  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:15.279801  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:15.279738  154810 retry.go:31] will retry after 1.755651302s: waiting for domain to come up
	I0203 11:35:17.037746  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:17.038278  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:17.038338  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:17.038262  154810 retry.go:31] will retry after 2.454133633s: waiting for domain to come up
	I0203 11:35:19.496170  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:19.496695  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:19.496733  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:19.496675  154810 retry.go:31] will retry after 2.562743001s: waiting for domain to come up
	I0203 11:35:22.061332  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:22.061809  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:22.061833  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:22.061783  154810 retry.go:31] will retry after 3.544093307s: waiting for domain to come up
	I0203 11:35:25.606923  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:25.607408  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find current IP address of domain kubernetes-upgrade-700839 in network mk-kubernetes-upgrade-700839
	I0203 11:35:25.607428  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | I0203 11:35:25.607378  154810 retry.go:31] will retry after 3.895624637s: waiting for domain to come up
	I0203 11:35:29.507132  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.507748  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) found domain IP: 192.168.50.247
	I0203 11:35:29.507777  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has current primary IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.507791  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) reserving static IP address...
	I0203 11:35:29.508223  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-700839", mac: "52:54:00:e8:3d:50", ip: "192.168.50.247"} in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.591143  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Getting to WaitForSSH function...
	I0203 11:35:29.591175  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) reserved static IP address 192.168.50.247 for domain kubernetes-upgrade-700839
	I0203 11:35:29.591189  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) waiting for SSH...
	I0203 11:35:29.593810  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.594439  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:29.594469  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.594601  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Using SSH client type: external
	I0203 11:35:29.594624  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa (-rw-------)
	I0203 11:35:29.594661  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:35:29.594680  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | About to run SSH command:
	I0203 11:35:29.594695  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | exit 0
	I0203 11:35:29.726274  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | SSH cmd err, output: <nil>: 
	I0203 11:35:29.726623  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) KVM machine creation complete
	I0203 11:35:29.726979  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetConfigRaw
	I0203 11:35:29.727760  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:29.728004  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:29.728190  154759 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0203 11:35:29.728210  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetState
	I0203 11:35:29.729576  154759 main.go:141] libmachine: Detecting operating system of created instance...
	I0203 11:35:29.729594  154759 main.go:141] libmachine: Waiting for SSH to be available...
	I0203 11:35:29.729599  154759 main.go:141] libmachine: Getting to WaitForSSH function...
	I0203 11:35:29.729605  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:29.732127  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.732501  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:29.732531  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.732651  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:29.732845  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:29.733014  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:29.733140  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:29.733313  154759 main.go:141] libmachine: Using SSH client type: native
	I0203 11:35:29.733545  154759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.247 22 <nil> <nil>}
	I0203 11:35:29.733558  154759 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0203 11:35:29.841255  154759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:35:29.841283  154759 main.go:141] libmachine: Detecting the provisioner...
	I0203 11:35:29.841294  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:29.843954  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.844378  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:29.844411  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.844525  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:29.844742  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:29.844897  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:29.845028  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:29.845260  154759 main.go:141] libmachine: Using SSH client type: native
	I0203 11:35:29.845493  154759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.247 22 <nil> <nil>}
	I0203 11:35:29.845512  154759 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0203 11:35:29.954896  154759 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0203 11:35:29.954996  154759 main.go:141] libmachine: found compatible host: buildroot
	I0203 11:35:29.955011  154759 main.go:141] libmachine: Provisioning with buildroot...
	I0203 11:35:29.955025  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetMachineName
	I0203 11:35:29.955330  154759 buildroot.go:166] provisioning hostname "kubernetes-upgrade-700839"
	I0203 11:35:29.955364  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetMachineName
	I0203 11:35:29.955528  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:29.958588  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.958926  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:29.958954  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:29.959160  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:29.959357  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:29.959537  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:29.959665  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:29.959837  154759 main.go:141] libmachine: Using SSH client type: native
	I0203 11:35:29.960034  154759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.247 22 <nil> <nil>}
	I0203 11:35:29.960049  154759 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-700839 && echo "kubernetes-upgrade-700839" | sudo tee /etc/hostname
	I0203 11:35:30.083585  154759 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-700839
	
	I0203 11:35:30.083622  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:30.086567  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.086858  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.086901  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.087123  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:30.087335  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:30.087583  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:30.087755  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:30.087953  154759 main.go:141] libmachine: Using SSH client type: native
	I0203 11:35:30.088143  154759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.247 22 <nil> <nil>}
	I0203 11:35:30.088162  154759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-700839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-700839/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-700839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:35:30.206622  154759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:35:30.206661  154759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:35:30.206689  154759 buildroot.go:174] setting up certificates
	I0203 11:35:30.206707  154759 provision.go:84] configureAuth start
	I0203 11:35:30.206729  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetMachineName
	I0203 11:35:30.207049  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetIP
	I0203 11:35:30.209856  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.210239  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.210268  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.210430  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:30.212778  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.213104  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.213136  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.213247  154759 provision.go:143] copyHostCerts
	I0203 11:35:30.213310  154759 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:35:30.213330  154759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:35:30.213389  154759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:35:30.213490  154759 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:35:30.213501  154759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:35:30.213522  154759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:35:30.213605  154759 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:35:30.213613  154759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:35:30.213631  154759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:35:30.213678  154759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-700839 san=[127.0.0.1 192.168.50.247 kubernetes-upgrade-700839 localhost minikube]
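
The provision step above bakes a fixed SAN list (127.0.0.1, 192.168.50.247, the node name, localhost, minikube) into the machine's server certificate. Below is a minimal, self-contained sketch of issuing a certificate with that SAN list via Go's crypto/x509; the in-memory CA and the Subject fields are illustrative stand-ins for the ca.pem/ca-key.pem files the real run loads from the .minikube certs directory, not minikube's provision code.

// A minimal sketch (not minikube's provision code) of issuing a server
// certificate with the SAN list from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative in-memory CA (the real one is the persistent minikubeCA on disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert: org and SANs taken from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-700839"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-700839", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.247")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
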
	I0203 11:35:30.492982  154759 provision.go:177] copyRemoteCerts
	I0203 11:35:30.493045  154759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:35:30.493072  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:30.496008  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.496353  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.496388  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.496606  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:30.496804  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:30.496978  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:30.497107  154759 sshutil.go:53] new ssh client: &{IP:192.168.50.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa Username:docker}
	I0203 11:35:30.584373  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:35:30.608173  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0203 11:35:30.632054  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:35:30.654196  154759 provision.go:87] duration metric: took 447.469022ms to configureAuth
	I0203 11:35:30.654233  154759 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:35:30.654399  154759 config.go:182] Loaded profile config "kubernetes-upgrade-700839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0203 11:35:30.654492  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:30.657155  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.657541  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.657577  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.657772  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:30.657973  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:30.658217  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:30.658369  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:30.658559  154759 main.go:141] libmachine: Using SSH client type: native
	I0203 11:35:30.658738  154759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.247 22 <nil> <nil>}
	I0203 11:35:30.658759  154759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:35:30.877862  154759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:35:30.877892  154759 main.go:141] libmachine: Checking connection to Docker...
	I0203 11:35:30.877906  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetURL
	I0203 11:35:30.879190  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | using libvirt version 6000000
	I0203 11:35:30.881822  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.882395  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.882431  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.882599  154759 main.go:141] libmachine: Docker is up and running!
	I0203 11:35:30.882616  154759 main.go:141] libmachine: Reticulating splines...
	I0203 11:35:30.882623  154759 client.go:171] duration metric: took 24.437317022s to LocalClient.Create
	I0203 11:35:30.882647  154759 start.go:167] duration metric: took 24.437402253s to libmachine.API.Create "kubernetes-upgrade-700839"
	I0203 11:35:30.882657  154759 start.go:293] postStartSetup for "kubernetes-upgrade-700839" (driver="kvm2")
	I0203 11:35:30.882667  154759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:35:30.882685  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:30.882991  154759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:35:30.883024  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:30.885453  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.885734  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:30.885779  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:30.885953  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:30.886153  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:30.886331  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:30.886438  154759 sshutil.go:53] new ssh client: &{IP:192.168.50.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa Username:docker}
	I0203 11:35:30.972510  154759 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:35:30.976931  154759 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:35:30.976958  154759 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:35:30.977031  154759 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:35:30.977103  154759 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:35:30.977202  154759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:35:30.988576  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:35:31.011923  154759 start.go:296] duration metric: took 129.251628ms for postStartSetup
	I0203 11:35:31.012024  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetConfigRaw
	I0203 11:35:31.012650  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetIP
	I0203 11:35:31.015149  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.015468  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:31.015497  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.015757  154759 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/config.json ...
	I0203 11:35:31.015934  154759 start.go:128] duration metric: took 24.592837791s to createHost
	I0203 11:35:31.015957  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:31.018019  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.018335  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:31.018366  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.018530  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:31.018728  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:31.018881  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:31.019035  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:31.019176  154759 main.go:141] libmachine: Using SSH client type: native
	I0203 11:35:31.019342  154759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.247 22 <nil> <nil>}
	I0203 11:35:31.019354  154759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:35:31.130694  154759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738582531.103908551
	
	I0203 11:35:31.130719  154759 fix.go:216] guest clock: 1738582531.103908551
	I0203 11:35:31.130726  154759 fix.go:229] Guest: 2025-02-03 11:35:31.103908551 +0000 UTC Remote: 2025-02-03 11:35:31.015946019 +0000 UTC m=+24.755535475 (delta=87.962532ms)
	I0203 11:35:31.130746  154759 fix.go:200] guest clock delta is within tolerance: 87.962532ms
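
The delta reported above is simply the guest's `date +%s.%N` output minus the host-side timestamp, checked against a tolerance. A small sketch of that check using the exact values from this run follows; the 2s tolerance is an assumed illustrative threshold, not necessarily the value minikube uses.

// Sketch of the guest-clock check: parse `date +%s.%N` output and compare it
// with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano assumes the %N part is the usual 9-digit nanosecond field.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, _ := parseUnixNano("1738582531.103908551") // guest output captured above
	host := time.Date(2025, time.February, 3, 11, 35, 31, 15946019, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
}
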
	I0203 11:35:31.130751  154759 start.go:83] releasing machines lock for "kubernetes-upgrade-700839", held for 24.707742216s
	I0203 11:35:31.130777  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:31.131070  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetIP
	I0203 11:35:31.133942  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.134321  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:31.134354  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.134552  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:31.135124  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:31.135334  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:35:31.135427  154759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:35:31.135477  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:31.135585  154759 ssh_runner.go:195] Run: cat /version.json
	I0203 11:35:31.135612  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:35:31.138223  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.138614  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:31.138642  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.138668  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.138854  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:31.139044  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:31.139072  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:31.139103  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:31.139228  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:31.139247  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:35:31.139384  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:35:31.139538  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:35:31.139556  154759 sshutil.go:53] new ssh client: &{IP:192.168.50.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa Username:docker}
	I0203 11:35:31.139657  154759 sshutil.go:53] new ssh client: &{IP:192.168.50.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa Username:docker}
	I0203 11:35:31.219226  154759 ssh_runner.go:195] Run: systemctl --version
	I0203 11:35:31.252994  154759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:35:31.420237  154759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:35:31.426509  154759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:35:31.426592  154759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:35:31.441792  154759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:35:31.441821  154759 start.go:495] detecting cgroup driver to use...
	I0203 11:35:31.441898  154759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:35:31.457052  154759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:35:31.471977  154759 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:35:31.472041  154759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:35:31.485201  154759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:35:31.498642  154759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:35:31.612661  154759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:35:31.750435  154759 docker.go:233] disabling docker service ...
	I0203 11:35:31.750521  154759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:35:31.765976  154759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:35:31.781157  154759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:35:31.932735  154759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:35:32.066995  154759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:35:32.080700  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:35:32.099001  154759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0203 11:35:32.099073  154759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:35:32.110212  154759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:35:32.110315  154759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:35:32.121224  154759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:35:32.133314  154759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:35:32.146187  154759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
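
The sed invocations above pin the pause image, switch CRI-O to the cgroupfs manager, and reset conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the same rewrites is sketched below, illustrative only; the real flow shells these sed commands out over SSH and also deletes any pre-existing conmon_cgroup line before appending.

// Rough Go equivalent (illustrative only) of the sed edits above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image used by CRI-O.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Switch to cgroupfs and put conmon in the pod cgroup.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
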
	I0203 11:35:32.157156  154759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:35:32.167274  154759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:35:32.167336  154759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:35:32.180294  154759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
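
Here the netfilter sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled directly through /proc. The same fallback in a short Go sketch (needs root; paths and commands match the log lines above):

// Sketch of the fallback: if the bridge-netfilter sysctl path is missing,
// load br_netfilter, then enable IPv4 forwarding via /proc.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const brKey = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(brKey); err != nil {
		// Mirrors `sudo modprobe br_netfilter` from the log above.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
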
	I0203 11:35:32.191250  154759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:35:32.319935  154759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:35:32.427502  154759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:35:32.427585  154759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:35:32.432144  154759 start.go:563] Will wait 60s for crictl version
	I0203 11:35:32.432213  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:32.436013  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:35:32.482424  154759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:35:32.482507  154759 ssh_runner.go:195] Run: crio --version
	I0203 11:35:32.513865  154759 ssh_runner.go:195] Run: crio --version
	I0203 11:35:32.545988  154759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0203 11:35:32.547368  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetIP
	I0203 11:35:32.550901  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:32.551308  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:35:22 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:35:32.551347  154759 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:35:32.551676  154759 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0203 11:35:32.556737  154759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:35:32.568994  154759 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-700839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-700839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:35:32.569201  154759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:35:32.569271  154759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:35:32.611491  154759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0203 11:35:32.611572  154759 ssh_runner.go:195] Run: which lz4
	I0203 11:35:32.615678  154759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:35:32.620110  154759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:35:32.620158  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0203 11:35:34.226729  154759 crio.go:462] duration metric: took 1.611085996s to copy over tarball
	I0203 11:35:34.226826  154759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:35:36.895984  154759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.669115464s)
	I0203 11:35:36.896013  154759 crio.go:469] duration metric: took 2.669244383s to extract the tarball
	I0203 11:35:36.896023  154759 ssh_runner.go:146] rm: /preloaded.tar.lz4
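
The lines above show the preload path: `stat /preloaded.tar.lz4` fails, the cached tarball is copied over, extracted into /var with lz4, then removed. A compact sketch of that decision as it would look on the guest side; the runCmd helper and the early return are illustrative, since the real code drives these commands through ssh_runner from the host.

// Sketch (guest-side view) of the preload flow above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runCmd(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow the tarball is scp'd from the host-side cache at this point.
		fmt.Println("preload tarball missing; copy it from the host cache first")
		return
	}
	// Same flags as the log: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	if err := runCmd("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		panic(err)
	}
	_ = runCmd("sudo", "rm", "-f", tarball) // the ssh_runner.go:146 rm step
}
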
	I0203 11:35:36.939372  154759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:35:36.986899  154759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0203 11:35:36.986930  154759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0203 11:35:36.986985  154759 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:35:36.987050  154759 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0203 11:35:36.987065  154759 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:36.987083  154759 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0203 11:35:36.987026  154759 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:36.987107  154759 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:36.987158  154759 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:36.987449  154759 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:36.988424  154759 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0203 11:35:36.988489  154759 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:36.988428  154759 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:36.988512  154759 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:36.988430  154759 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0203 11:35:36.988429  154759 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:36.988429  154759 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:36.988693  154759 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:35:37.197632  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0203 11:35:37.205719  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:37.215844  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:37.215966  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:37.228550  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0203 11:35:37.242659  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:37.269291  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:37.280134  154759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0203 11:35:37.280187  154759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0203 11:35:37.280234  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.310710  154759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0203 11:35:37.310771  154759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:37.310828  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.360578  154759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0203 11:35:37.360621  154759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0203 11:35:37.360637  154759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:37.360639  154759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:37.360688  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.360688  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.367733  154759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0203 11:35:37.367790  154759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0203 11:35:37.367797  154759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0203 11:35:37.367843  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.367846  154759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:37.367940  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.387359  154759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0203 11:35:37.387406  154759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:37.387455  154759 ssh_runner.go:195] Run: which crictl
	I0203 11:35:37.387471  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:35:37.387560  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:37.387576  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:37.387648  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:37.387683  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:37.387656  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:35:37.505292  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:37.505292  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:35:37.505327  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:35:37.517238  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:37.517261  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:37.517261  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:37.517341  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:37.636406  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:37.636444  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:35:37.636542  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:35:37.660895  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:35:37.688997  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:35:37.689076  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:35:37.689104  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:35:37.783318  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0203 11:35:37.783446  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0203 11:35:37.783472  154759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:35:37.820306  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0203 11:35:37.820390  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0203 11:35:37.824241  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0203 11:35:37.824353  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0203 11:35:37.845862  154759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0203 11:35:38.100452  154759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:35:38.247655  154759 cache_images.go:92] duration metric: took 1.260707452s to LoadCachedImages
	W0203 11:35:38.247753  154759 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
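
Each "needs transfer" decision above comes from inspecting the image ID in the runtime and comparing it to the hash minikube expects; a mismatch or missing image triggers a crictl rmi followed by a load from the local cache (which is what fails here, since the cached image files do not exist on this host). A minimal sketch of that comparison, using the pause image reference and hash from this log:

// Sketch of the "needs transfer" check implied by the cache_images lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageMatches(ref, wantID string) bool {
	// Same command the log runs: sudo podman image inspect --format {{.Id}} <ref>
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	if err != nil {
		return false // image not present in the runtime
	}
	return strings.TrimSpace(string(out)) == wantID
}

func main() {
	ref := "registry.k8s.io/pause:3.2"
	want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	if !imageMatches(ref, want) {
		fmt.Printf("%s needs transfer: remove it, then load the cached tarball\n", ref)
	}
}
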
	I0203 11:35:38.247772  154759 kubeadm.go:934] updating node { 192.168.50.247 8443 v1.20.0 crio true true} ...
	I0203 11:35:38.247910  154759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-700839 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-700839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
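
The kubelet block above is the systemd drop-in minikube renders for this node (CRI-O socket, hostname override, node IP) before writing it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. An illustrative text/template rendering of the same drop-in follows; the template text and field names here are assumptions for the sketch, not minikube's actual template.

// Illustrative rendering of the kubelet drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the node config in this log.
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.20.0",
		"NodeName":          "kubernetes-upgrade-700839",
		"NodeIP":            "192.168.50.247",
	})
}
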
	I0203 11:35:38.248002  154759 ssh_runner.go:195] Run: crio config
	I0203 11:35:38.297118  154759 cni.go:84] Creating CNI manager for ""
	I0203 11:35:38.297148  154759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:35:38.297161  154759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:35:38.297189  154759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.247 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-700839 NodeName:kubernetes-upgrade-700839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0203 11:35:38.297392  154759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-700839"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:35:38.297477  154759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0203 11:35:38.308624  154759 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:35:38.308703  154759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:35:38.317757  154759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0203 11:35:38.334036  154759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:35:38.350761  154759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0203 11:35:38.367191  154759 ssh_runner.go:195] Run: grep 192.168.50.247	control-plane.minikube.internal$ /etc/hosts
	I0203 11:35:38.371372  154759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:35:38.384815  154759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:35:38.494187  154759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:35:38.512334  154759 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839 for IP: 192.168.50.247
	I0203 11:35:38.512366  154759 certs.go:194] generating shared ca certs ...
	I0203 11:35:38.512383  154759 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:38.512576  154759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:35:38.512639  154759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:35:38.512653  154759 certs.go:256] generating profile certs ...
	I0203 11:35:38.512726  154759 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.key
	I0203 11:35:38.512746  154759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.crt with IP's: []
	I0203 11:35:38.620213  154759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.crt ...
	I0203 11:35:38.620243  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.crt: {Name:mkfb909bdb9b08ba38de19994cdcb94fae48e111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:38.620413  154759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.key ...
	I0203 11:35:38.620427  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.key: {Name:mk874e8551860f8c006286f7dd9c1952ef1d84d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:38.620505  154759 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.key.ab3d452b
	I0203 11:35:38.620522  154759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.crt.ab3d452b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.247]
	I0203 11:35:39.020322  154759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.crt.ab3d452b ...
	I0203 11:35:39.020363  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.crt.ab3d452b: {Name:mk77e8e3921de6e8f621d19ec4fb9d8bb9b34311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:39.020569  154759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.key.ab3d452b ...
	I0203 11:35:39.020590  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.key.ab3d452b: {Name:mkca1248caa717fe29718a19634e18aee23860ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:39.020682  154759 certs.go:381] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.crt.ab3d452b -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.crt
	I0203 11:35:39.020775  154759 certs.go:385] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.key.ab3d452b -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.key
	I0203 11:35:39.020852  154759 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.key
	I0203 11:35:39.020875  154759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.crt with IP's: []
	I0203 11:35:39.075046  154759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.crt ...
	I0203 11:35:39.075084  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.crt: {Name:mk094a6383880e6d6dd1e96d0d876e449a763b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:39.075275  154759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.key ...
	I0203 11:35:39.075296  154759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.key: {Name:mk116c3ffb17a16a49b04befee85dd5caeea17c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:35:39.075570  154759 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:35:39.075633  154759 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:35:39.075645  154759 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:35:39.075675  154759 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:35:39.075707  154759 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:35:39.075734  154759 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:35:39.075791  154759 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:35:39.076468  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:35:39.103374  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:35:39.127803  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:35:39.152529  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:35:39.179482  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0203 11:35:39.204357  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:35:39.229263  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:35:39.254666  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:35:39.282599  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:35:39.309161  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:35:39.338646  154759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:35:39.382389  154759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:35:39.413391  154759 ssh_runner.go:195] Run: openssl version
	I0203 11:35:39.422977  154759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:35:39.435210  154759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:35:39.440433  154759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:35:39.440506  154759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:35:39.447412  154759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:35:39.459246  154759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:35:39.471493  154759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:35:39.476361  154759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:35:39.476421  154759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:35:39.483303  154759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:35:39.495124  154759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:35:39.507405  154759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:35:39.512125  154759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:35:39.512184  154759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:35:39.518333  154759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
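	Note: the openssl/ln calls above follow the standard OpenSSL CA trust layout: each certificate under /usr/share/ca-certificates is hashed and symlinked into /etc/ssl/certs as <hash>.0. A sketch of the same pattern for one certificate, with the hash computed rather than hard-coded as in the log:
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"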
	I0203 11:35:39.530504  154759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:35:39.535313  154759 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:35:39.535376  154759 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-700839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-700839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:35:39.535455  154759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:35:39.535504  154759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:35:39.575099  154759 cri.go:89] found id: ""
	I0203 11:35:39.575180  154759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:35:39.586497  154759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:35:39.597834  154759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:35:39.608241  154759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:35:39.608271  154759 kubeadm.go:157] found existing configuration files:
	
	I0203 11:35:39.608328  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:35:39.617809  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:35:39.617871  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:35:39.628039  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:35:39.637279  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:35:39.637351  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:35:39.647348  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:35:39.656945  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:35:39.657039  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:35:39.667599  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:35:39.677833  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:35:39.677919  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
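	Note: the grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs. A sketch of the same pattern (not minikube source, just the loop the log shows):
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
		    || sudo rm -f "/etc/kubernetes/${f}.conf"
		done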
	I0203 11:35:39.688628  154759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:35:39.955711  154759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:37:37.968778  154759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:37:37.968895  154759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:37:37.971086  154759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:37:37.971162  154759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:37:37.971235  154759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:37:37.971308  154759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:37:37.971430  154759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:37:37.971518  154759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:37:37.972823  154759 out.go:235]   - Generating certificates and keys ...
	I0203 11:37:37.972909  154759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:37:37.972964  154759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:37:37.973055  154759 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 11:37:37.973131  154759 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 11:37:37.973185  154759 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 11:37:37.973233  154759 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 11:37:37.973279  154759 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 11:37:37.973468  154759 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-700839 localhost] and IPs [192.168.50.247 127.0.0.1 ::1]
	I0203 11:37:37.973521  154759 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 11:37:37.973645  154759 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-700839 localhost] and IPs [192.168.50.247 127.0.0.1 ::1]
	I0203 11:37:37.973725  154759 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 11:37:37.973802  154759 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 11:37:37.973853  154759 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 11:37:37.973920  154759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:37:37.974018  154759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:37:37.974105  154759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:37:37.974200  154759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:37:37.974291  154759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:37:37.974445  154759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:37:37.974569  154759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:37:37.974640  154759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:37:37.974751  154759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:37:37.976085  154759 out.go:235]   - Booting up control plane ...
	I0203 11:37:37.976192  154759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:37:37.976284  154759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:37:37.976373  154759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:37:37.976488  154759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:37:37.976632  154759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:37:37.976685  154759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:37:37.976739  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:37:37.976946  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:37:37.977015  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:37:37.977186  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:37:37.977256  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:37:37.977438  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:37:37.977504  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:37:37.977680  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:37:37.977763  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:37:37.978032  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:37:37.978042  154759 kubeadm.go:310] 
	I0203 11:37:37.978074  154759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:37:37.978108  154759 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:37:37.978115  154759 kubeadm.go:310] 
	I0203 11:37:37.978147  154759 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:37:37.978205  154759 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:37:37.978300  154759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:37:37.978307  154759 kubeadm.go:310] 
	I0203 11:37:37.978399  154759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:37:37.978429  154759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:37:37.978472  154759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:37:37.978480  154759 kubeadm.go:310] 
	I0203 11:37:37.978586  154759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:37:37.978653  154759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:37:37.978659  154759 kubeadm.go:310] 
	I0203 11:37:37.978750  154759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:37:37.978824  154759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:37:37.978918  154759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:37:37.978989  154759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:37:37.979065  154759 kubeadm.go:310] 
	W0203 11:37:37.979150  154759 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-700839 localhost] and IPs [192.168.50.247 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-700839 localhost] and IPs [192.168.50.247 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-700839 localhost] and IPs [192.168.50.247 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-700839 localhost] and IPs [192.168.50.247 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
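	Note: the troubleshooting guidance embedded in the kubeadm output above amounts to checking the kubelet service and the CRI-O containers on the node. Collected as one sketch, to be run inside the VM via 'minikube ssh' (CONTAINERID is a placeholder, as in the log):
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID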
	
	I0203 11:37:37.979197  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:37:39.807048  154759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.827819219s)
	I0203 11:37:39.807137  154759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:37:39.820983  154759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:37:39.830528  154759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:37:39.830555  154759 kubeadm.go:157] found existing configuration files:
	
	I0203 11:37:39.830607  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:37:39.839465  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:37:39.839528  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:37:39.849137  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:37:39.857890  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:37:39.857958  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:37:39.867209  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:37:39.876181  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:37:39.876262  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:37:39.885548  154759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:37:39.894834  154759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:37:39.894890  154759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:37:39.904102  154759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:37:39.969907  154759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:37:39.970035  154759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:37:40.115258  154759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:37:40.115436  154759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:37:40.115596  154759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:37:40.318168  154759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:37:40.320001  154759 out.go:235]   - Generating certificates and keys ...
	I0203 11:37:40.320122  154759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:37:40.321720  154759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:37:40.321836  154759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:37:40.321919  154759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:37:40.322021  154759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:37:40.322098  154759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:37:40.322203  154759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:37:40.322321  154759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:37:40.322441  154759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:37:40.322550  154759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:37:40.322636  154759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:37:40.322743  154759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:37:40.584225  154759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:37:40.803847  154759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:37:40.880506  154759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:37:40.995186  154759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:37:41.009935  154759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:37:41.011090  154759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:37:41.011174  154759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:37:41.154203  154759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:37:41.155892  154759 out.go:235]   - Booting up control plane ...
	I0203 11:37:41.156024  154759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:37:41.167755  154759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:37:41.170277  154759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:37:41.170401  154759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:37:41.171510  154759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:38:21.173914  154759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:38:21.174359  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:38:21.174579  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:38:26.174926  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:38:26.175236  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:38:36.175639  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:38:36.175899  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:38:56.174935  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:38:56.175211  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:39:36.175140  154759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:39:36.175407  154759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:39:36.175429  154759 kubeadm.go:310] 
	I0203 11:39:36.175492  154759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:39:36.175645  154759 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:39:36.175659  154759 kubeadm.go:310] 
	I0203 11:39:36.175702  154759 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:39:36.175738  154759 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:39:36.175870  154759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:39:36.175879  154759 kubeadm.go:310] 
	I0203 11:39:36.176020  154759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:39:36.176067  154759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:39:36.176110  154759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:39:36.176118  154759 kubeadm.go:310] 
	I0203 11:39:36.176252  154759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:39:36.176367  154759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:39:36.176392  154759 kubeadm.go:310] 
	I0203 11:39:36.176481  154759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:39:36.176552  154759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:39:36.176615  154759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:39:36.176693  154759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:39:36.176699  154759 kubeadm.go:310] 
	I0203 11:39:36.179582  154759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:39:36.179708  154759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:39:36.179826  154759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:39:36.179900  154759 kubeadm.go:394] duration metric: took 3m56.644526904s to StartCluster
	I0203 11:39:36.179968  154759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:39:36.180016  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:39:36.241048  154759 cri.go:89] found id: ""
	I0203 11:39:36.241075  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.241085  154759 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:39:36.241092  154759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:39:36.241144  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:39:36.294093  154759 cri.go:89] found id: ""
	I0203 11:39:36.294131  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.294143  154759 logs.go:284] No container was found matching "etcd"
	I0203 11:39:36.294152  154759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:39:36.294222  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:39:36.341536  154759 cri.go:89] found id: ""
	I0203 11:39:36.341569  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.341580  154759 logs.go:284] No container was found matching "coredns"
	I0203 11:39:36.341588  154759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:39:36.341667  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:39:36.386920  154759 cri.go:89] found id: ""
	I0203 11:39:36.386952  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.386963  154759 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:39:36.386972  154759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:39:36.387039  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:39:36.424967  154759 cri.go:89] found id: ""
	I0203 11:39:36.424989  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.424997  154759 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:39:36.425003  154759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:39:36.425047  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:39:36.468444  154759 cri.go:89] found id: ""
	I0203 11:39:36.468472  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.468484  154759 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:39:36.468492  154759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:39:36.468557  154759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:39:36.519707  154759 cri.go:89] found id: ""
	I0203 11:39:36.519739  154759 logs.go:282] 0 containers: []
	W0203 11:39:36.519749  154759 logs.go:284] No container was found matching "kindnet"
	I0203 11:39:36.519763  154759 logs.go:123] Gathering logs for dmesg ...
	I0203 11:39:36.519786  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:39:36.537429  154759 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:39:36.537469  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:39:36.719324  154759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:39:36.719346  154759 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:39:36.719358  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:39:36.839763  154759 logs.go:123] Gathering logs for container status ...
	I0203 11:39:36.839818  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:39:36.882611  154759 logs.go:123] Gathering logs for kubelet ...
	I0203 11:39:36.882648  154759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0203 11:39:36.943266  154759 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 11:39:36.943327  154759 out.go:270] * 
	* 
	W0203 11:39:36.943417  154759 out.go:270] * 
	* 
	W0203 11:39:36.944423  154759 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 11:39:36.947109  154759 out.go:201] 
	W0203 11:39:36.948209  154759 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:39:36.948269  154759 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 11:39:36.948303  154759 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 11:39:36.949717  154759 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
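The failed v1.20.0 start above ends with kubeadm's own troubleshooting hints (check the kubelet via systemctl/journalctl, list control-plane containers with crictl) and minikube's suggestion to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of how those hints could be followed from this CI host, assuming the kubernetes-upgrade-700839 profile from the test still exists; the commands are the ones quoted in the log, only wrapped in minikube ssh:

	# Sketch only: the kubelet checks that kubeadm suggests, run inside the VM.
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-700839 "sudo systemctl status kubelet --all --full --no-pager"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-700839 "sudo journalctl -xeu kubelet --no-pager"
	# List control-plane containers via CRI-O (command quoted from the log above).
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-700839 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override from minikube's suggestion.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd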
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-700839
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-700839: (1.478644815s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-700839 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-700839 status --format={{.Host}}: exit status 7 (77.504539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
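For context on the "(may be ok)" note: minikube status exits non-zero whenever the profile is not fully running, so a just-stopped profile printing "Stopped" with exit status 7 is the state the test expects here rather than a new failure. A rough reproduction of the same check, using the profile name from this test:

	# Sketch: a stopped profile makes "status" return a non-zero code while printing Stopped.
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-700839
	out/minikube-linux-amd64 status -p kubernetes-upgrade-700839 --format={{.Host}}   # prints: Stopped
	echo $?                                                                           # non-zero (7 in this run)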
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.820052564s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-700839 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (116.652642ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-700839] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-700839
	    minikube start -p kubernetes-upgrade-700839 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7008392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-700839 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
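The K8S_DOWNGRADE_UNSUPPORTED error is the outcome this step asserts (see "Attempting to downgrade Kubernetes (should fail)" above): minikube refuses to downgrade an existing v1.32.1 cluster in place. If the older version were actually wanted, suggestion 1 from the output amounts to recreating the profile; a sketch using the flags from this test:

	# Sketch of suggestion 1: recreate the profile at the older Kubernetes version.
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-700839
	out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio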
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-700839 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.972480125s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-03 11:41:45.572538599 +0000 UTC m=+4142.485957930
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-700839 -n kubernetes-upgrade-700839
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-700839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-700839 logs -n 25: (1.733103262s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018 sudo cat                | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018 sudo cat                | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018 sudo cat                | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-927018                         | enable-default-cni-927018 | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC | 03 Feb 25 11:41 UTC |
	| start   | -p old-k8s-version-517711                            | old-k8s-version-517711    | jenkins | v1.35.0 | 03 Feb 25 11:41 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:41:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:41:26.865901  166532 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:41:26.866066  166532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:41:26.866091  166532 out.go:358] Setting ErrFile to fd 2...
	I0203 11:41:26.866095  166532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:41:26.866252  166532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:41:26.866823  166532 out.go:352] Setting JSON to false
	I0203 11:41:26.868077  166532 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8629,"bootTime":1738574258,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:41:26.868191  166532 start.go:139] virtualization: kvm guest
	I0203 11:41:26.870359  166532 out.go:177] * [old-k8s-version-517711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:41:26.871528  166532 notify.go:220] Checking for updates...
	I0203 11:41:26.871551  166532 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:41:26.872770  166532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:41:26.874106  166532 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:41:26.875191  166532 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:41:26.876242  166532 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:41:26.877308  166532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:41:26.878824  166532 config.go:182] Loaded profile config "bridge-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:26.878926  166532 config.go:182] Loaded profile config "flannel-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:26.879006  166532 config.go:182] Loaded profile config "kubernetes-upgrade-700839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:26.879091  166532 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:41:26.915059  166532 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 11:41:26.916220  166532 start.go:297] selected driver: kvm2
	I0203 11:41:26.916232  166532 start.go:901] validating driver "kvm2" against <nil>
	I0203 11:41:26.916259  166532 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:41:26.916946  166532 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:41:26.917045  166532 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:41:26.932544  166532 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:41:26.932607  166532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:41:26.932891  166532 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:41:26.932926  166532 cni.go:84] Creating CNI manager for ""
	I0203 11:41:26.932986  166532 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:41:26.932998  166532 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 11:41:26.933057  166532 start.go:340] cluster config:
	{Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:41:26.933172  166532 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:41:26.934976  166532 out.go:177] * Starting "old-k8s-version-517711" primary control-plane node in "old-k8s-version-517711" cluster
	I0203 11:41:23.788447  163102 node_ready.go:53] node "flannel-927018" has status "Ready":"False"
	I0203 11:41:25.789034  163102 node_ready.go:49] node "flannel-927018" has status "Ready":"True"
	I0203 11:41:25.789061  163102 node_ready.go:38] duration metric: took 11.004397938s for node "flannel-927018" to be "Ready" ...
	I0203 11:41:25.789074  163102 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:41:25.800368  163102 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:27.807156  163102 pod_ready.go:103] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"False"
	I0203 11:41:26.936291  166532 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:41:26.936338  166532 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0203 11:41:26.936357  166532 cache.go:56] Caching tarball of preloaded images
	I0203 11:41:26.936783  166532 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:41:26.936815  166532 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0203 11:41:26.936945  166532 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json ...
	I0203 11:41:26.936977  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json: {Name:mk4dcbba19913098aa7d6976ed46cfbb452fb29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:26.937263  166532 start.go:360] acquireMachinesLock for old-k8s-version-517711: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:41:28.590854  166532 start.go:364] duration metric: took 1.65352527s to acquireMachinesLock for "old-k8s-version-517711"
	I0203 11:41:28.590941  166532 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:41:28.591040  166532 start.go:125] createHost starting for "" (driver="kvm2")
	I0203 11:41:27.051218  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.051746  164707 main.go:141] libmachine: (bridge-927018) found domain IP: 192.168.39.220
	I0203 11:41:27.051771  164707 main.go:141] libmachine: (bridge-927018) reserving static IP address...
	I0203 11:41:27.051785  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has current primary IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.052257  164707 main.go:141] libmachine: (bridge-927018) DBG | unable to find host DHCP lease matching {name: "bridge-927018", mac: "52:54:00:58:2e:e6", ip: "192.168.39.220"} in network mk-bridge-927018
	I0203 11:41:27.142619  164707 main.go:141] libmachine: (bridge-927018) reserved static IP address 192.168.39.220 for domain bridge-927018
	I0203 11:41:27.142647  164707 main.go:141] libmachine: (bridge-927018) waiting for SSH...
	I0203 11:41:27.142668  164707 main.go:141] libmachine: (bridge-927018) DBG | Getting to WaitForSSH function...
	I0203 11:41:27.145219  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.145701  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.145735  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.145845  164707 main.go:141] libmachine: (bridge-927018) DBG | Using SSH client type: external
	I0203 11:41:27.145872  164707 main.go:141] libmachine: (bridge-927018) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/bridge-927018/id_rsa (-rw-------)
	I0203 11:41:27.145907  164707 main.go:141] libmachine: (bridge-927018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/bridge-927018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:41:27.145926  164707 main.go:141] libmachine: (bridge-927018) DBG | About to run SSH command:
	I0203 11:41:27.145941  164707 main.go:141] libmachine: (bridge-927018) DBG | exit 0
	I0203 11:41:27.283051  164707 main.go:141] libmachine: (bridge-927018) DBG | SSH cmd err, output: <nil>: 
	I0203 11:41:27.283380  164707 main.go:141] libmachine: (bridge-927018) KVM machine creation complete
	I0203 11:41:27.283690  164707 main.go:141] libmachine: (bridge-927018) Calling .GetConfigRaw
	I0203 11:41:27.284240  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:27.284401  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:27.284523  164707 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0203 11:41:27.284541  164707 main.go:141] libmachine: (bridge-927018) Calling .GetState
	I0203 11:41:27.285822  164707 main.go:141] libmachine: Detecting operating system of created instance...
	I0203 11:41:27.285840  164707 main.go:141] libmachine: Waiting for SSH to be available...
	I0203 11:41:27.285848  164707 main.go:141] libmachine: Getting to WaitForSSH function...
	I0203 11:41:27.285857  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:27.289071  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.289486  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.289521  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.289680  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:27.289891  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.290093  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.290272  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:27.290469  164707 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:27.290706  164707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0203 11:41:27.290722  164707 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0203 11:41:27.409396  164707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:41:27.409423  164707 main.go:141] libmachine: Detecting the provisioner...
	I0203 11:41:27.409431  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:27.412489  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.412919  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.412962  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.413186  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:27.413388  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.413551  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.413685  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:27.413866  164707 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:27.414109  164707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0203 11:41:27.414122  164707 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0203 11:41:27.527056  164707 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0203 11:41:27.527135  164707 main.go:141] libmachine: found compatible host: buildroot
	I0203 11:41:27.527150  164707 main.go:141] libmachine: Provisioning with buildroot...
	I0203 11:41:27.527175  164707 main.go:141] libmachine: (bridge-927018) Calling .GetMachineName
	I0203 11:41:27.527461  164707 buildroot.go:166] provisioning hostname "bridge-927018"
	I0203 11:41:27.527490  164707 main.go:141] libmachine: (bridge-927018) Calling .GetMachineName
	I0203 11:41:27.527707  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:27.530227  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.530595  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.530620  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.530774  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:27.530940  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.531095  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.531228  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:27.531379  164707 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:27.531580  164707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0203 11:41:27.531595  164707 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-927018 && echo "bridge-927018" | sudo tee /etc/hostname
	I0203 11:41:27.664471  164707 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-927018
	
	I0203 11:41:27.664505  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:27.667602  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.668025  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.668051  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.668285  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:27.668448  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.668593  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.668724  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:27.668917  164707 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:27.669091  164707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0203 11:41:27.669107  164707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-927018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-927018/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-927018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:41:27.791634  164707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:41:27.791678  164707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:41:27.791710  164707 buildroot.go:174] setting up certificates
	I0203 11:41:27.791723  164707 provision.go:84] configureAuth start
	I0203 11:41:27.791740  164707 main.go:141] libmachine: (bridge-927018) Calling .GetMachineName
	I0203 11:41:27.792058  164707 main.go:141] libmachine: (bridge-927018) Calling .GetIP
	I0203 11:41:27.794406  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.794761  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.794783  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.794915  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:27.797198  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.797556  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.797581  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.797729  164707 provision.go:143] copyHostCerts
	I0203 11:41:27.797795  164707 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:41:27.797819  164707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:41:27.797891  164707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:41:27.798077  164707 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:41:27.798092  164707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:41:27.798131  164707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:41:27.798213  164707 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:41:27.798225  164707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:41:27.798252  164707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:41:27.798320  164707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.bridge-927018 san=[127.0.0.1 192.168.39.220 bridge-927018 localhost minikube]
	I0203 11:41:27.930916  164707 provision.go:177] copyRemoteCerts
	I0203 11:41:27.930974  164707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:41:27.931001  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:27.933992  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.934313  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:27.934363  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:27.934528  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:27.934716  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:27.934861  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:27.934986  164707 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/bridge-927018/id_rsa Username:docker}
	I0203 11:41:28.021122  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:41:28.044838  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:41:28.069291  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:41:28.092594  164707 provision.go:87] duration metric: took 300.853728ms to configureAuth
	I0203 11:41:28.092628  164707 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:41:28.092803  164707 config.go:182] Loaded profile config "bridge-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:28.092897  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:28.095756  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.096213  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.096253  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.096380  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:28.096578  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.096757  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.096900  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:28.097092  164707 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:28.097257  164707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0203 11:41:28.097271  164707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:41:28.334386  164707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:41:28.334414  164707 main.go:141] libmachine: Checking connection to Docker...
	I0203 11:41:28.334425  164707 main.go:141] libmachine: (bridge-927018) Calling .GetURL
	I0203 11:41:28.335740  164707 main.go:141] libmachine: (bridge-927018) DBG | using libvirt version 6000000
	I0203 11:41:28.338097  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.338434  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.338466  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.338603  164707 main.go:141] libmachine: Docker is up and running!
	I0203 11:41:28.338617  164707 main.go:141] libmachine: Reticulating splines...
	I0203 11:41:28.338625  164707 client.go:171] duration metric: took 27.318144059s to LocalClient.Create
	I0203 11:41:28.338648  164707 start.go:167] duration metric: took 27.318205246s to libmachine.API.Create "bridge-927018"
	I0203 11:41:28.338659  164707 start.go:293] postStartSetup for "bridge-927018" (driver="kvm2")
	I0203 11:41:28.338668  164707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:41:28.338685  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:28.338931  164707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:41:28.338965  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:28.341234  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.341555  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.341597  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.341747  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:28.341942  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.342145  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:28.342275  164707 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/bridge-927018/id_rsa Username:docker}
	I0203 11:41:28.429399  164707 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:41:28.433503  164707 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:41:28.433533  164707 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:41:28.433619  164707 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:41:28.433719  164707 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:41:28.433816  164707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:41:28.443290  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:41:28.468862  164707 start.go:296] duration metric: took 130.186124ms for postStartSetup
	I0203 11:41:28.468918  164707 main.go:141] libmachine: (bridge-927018) Calling .GetConfigRaw
	I0203 11:41:28.469478  164707 main.go:141] libmachine: (bridge-927018) Calling .GetIP
	I0203 11:41:28.472447  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.472821  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.472856  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.473081  164707 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/config.json ...
	I0203 11:41:28.473272  164707 start.go:128] duration metric: took 27.516881732s to createHost
	I0203 11:41:28.473297  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:28.475784  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.476164  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.476194  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.476351  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:28.476573  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.476752  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.476947  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:28.477102  164707 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:28.477285  164707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0203 11:41:28.477299  164707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:41:28.590663  164707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738582888.548373849
	
	I0203 11:41:28.590690  164707 fix.go:216] guest clock: 1738582888.548373849
	I0203 11:41:28.590699  164707 fix.go:229] Guest: 2025-02-03 11:41:28.548373849 +0000 UTC Remote: 2025-02-03 11:41:28.473286077 +0000 UTC m=+48.720465555 (delta=75.087772ms)
	I0203 11:41:28.590739  164707 fix.go:200] guest clock delta is within tolerance: 75.087772ms
	I0203 11:41:28.590745  164707 start.go:83] releasing machines lock for "bridge-927018", held for 27.634529223s
	I0203 11:41:28.590771  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:28.591058  164707 main.go:141] libmachine: (bridge-927018) Calling .GetIP
	I0203 11:41:28.594182  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.594500  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.594529  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.594741  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:28.595204  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:28.595374  164707 main.go:141] libmachine: (bridge-927018) Calling .DriverName
	I0203 11:41:28.595447  164707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:41:28.595518  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:28.595578  164707 ssh_runner.go:195] Run: cat /version.json
	I0203 11:41:28.595616  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHHostname
	I0203 11:41:28.598517  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.598801  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.598845  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.598864  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.599140  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:28.599280  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:28.599301  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:28.599332  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.599463  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHPort
	I0203 11:41:28.599546  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:28.599617  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHKeyPath
	I0203 11:41:28.599714  164707 main.go:141] libmachine: (bridge-927018) Calling .GetSSHUsername
	I0203 11:41:28.599706  164707 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/bridge-927018/id_rsa Username:docker}
	I0203 11:41:28.599879  164707 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/bridge-927018/id_rsa Username:docker}
	I0203 11:41:28.717604  164707 ssh_runner.go:195] Run: systemctl --version
	I0203 11:41:28.724954  164707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:41:28.899337  164707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:41:28.905158  164707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:41:28.905244  164707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:41:28.921594  164707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:41:28.921640  164707 start.go:495] detecting cgroup driver to use...
	I0203 11:41:28.921717  164707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:41:28.946403  164707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:41:28.967276  164707 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:41:28.967336  164707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:41:28.982305  164707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:41:28.996461  164707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:41:29.122123  164707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:41:29.291050  164707 docker.go:233] disabling docker service ...
	I0203 11:41:29.291123  164707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:41:29.306566  164707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:41:29.322595  164707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:41:29.473243  164707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:41:29.613406  164707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:41:29.628457  164707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:41:29.648103  164707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 11:41:29.648179  164707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.658717  164707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:41:29.658783  164707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.668991  164707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.679787  164707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.690856  164707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:41:29.702091  164707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.713649  164707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.732480  164707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:29.742738  164707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:41:29.752005  164707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:41:29.752076  164707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:41:29.766338  164707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:41:29.775897  164707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:41:29.901606  164707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:41:29.991028  164707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:41:29.991105  164707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:41:29.995576  164707 start.go:563] Will wait 60s for crictl version
	I0203 11:41:29.995642  164707 ssh_runner.go:195] Run: which crictl
	I0203 11:41:29.999445  164707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:41:30.045357  164707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:41:30.045460  164707 ssh_runner.go:195] Run: crio --version
	I0203 11:41:30.073641  164707 ssh_runner.go:195] Run: crio --version
	I0203 11:41:30.105179  164707 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0203 11:41:28.593182  166532 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:41:28.593400  166532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:28.593458  166532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:28.613338  166532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0203 11:41:28.613844  166532 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:28.614480  166532 main.go:141] libmachine: Using API Version  1
	I0203 11:41:28.614503  166532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:28.614820  166532 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:28.615028  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:41:28.615166  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:28.615317  166532 start.go:159] libmachine.API.Create for "old-k8s-version-517711" (driver="kvm2")
	I0203 11:41:28.615340  166532 client.go:168] LocalClient.Create starting
	I0203 11:41:28.615379  166532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem
	I0203 11:41:28.615422  166532 main.go:141] libmachine: Decoding PEM data...
	I0203 11:41:28.615442  166532 main.go:141] libmachine: Parsing certificate...
	I0203 11:41:28.615525  166532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem
	I0203 11:41:28.615559  166532 main.go:141] libmachine: Decoding PEM data...
	I0203 11:41:28.615577  166532 main.go:141] libmachine: Parsing certificate...
	I0203 11:41:28.615603  166532 main.go:141] libmachine: Running pre-create checks...
	I0203 11:41:28.615616  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .PreCreateCheck
	I0203 11:41:28.615929  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetConfigRaw
	I0203 11:41:28.616293  166532 main.go:141] libmachine: Creating machine...
	I0203 11:41:28.616302  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .Create
	I0203 11:41:28.616496  166532 main.go:141] libmachine: (old-k8s-version-517711) creating KVM machine...
	I0203 11:41:28.616520  166532 main.go:141] libmachine: (old-k8s-version-517711) creating network...
	I0203 11:41:28.617909  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found existing default KVM network
	I0203 11:41:28.619432  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.619281  166585 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:e3:f1} reservation:<nil>}
	I0203 11:41:28.620358  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.620277  166585 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:e5:75} reservation:<nil>}
	I0203 11:41:28.621571  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.621495  166585 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030cc90}
	I0203 11:41:28.621619  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | created network xml: 
	I0203 11:41:28.621639  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | <network>
	I0203 11:41:28.621664  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   <name>mk-old-k8s-version-517711</name>
	I0203 11:41:28.621674  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   <dns enable='no'/>
	I0203 11:41:28.621682  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   
	I0203 11:41:28.621705  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0203 11:41:28.621737  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |     <dhcp>
	I0203 11:41:28.621763  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0203 11:41:28.621816  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |     </dhcp>
	I0203 11:41:28.621849  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   </ip>
	I0203 11:41:28.621864  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   
	I0203 11:41:28.621878  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | </network>
	I0203 11:41:28.621888  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | 
	I0203 11:41:28.627066  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | trying to create private KVM network mk-old-k8s-version-517711 192.168.61.0/24...
	I0203 11:41:28.710113  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | private KVM network mk-old-k8s-version-517711 192.168.61.0/24 created
	I0203 11:41:28.710154  166532 main.go:141] libmachine: (old-k8s-version-517711) setting up store path in /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711 ...
	I0203 11:41:28.710167  166532 main.go:141] libmachine: (old-k8s-version-517711) building disk image from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0203 11:41:28.710185  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.710136  166585 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:41:28.710351  166532 main.go:141] libmachine: (old-k8s-version-517711) Downloading /home/jenkins/minikube-integration/20354-109432/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:41:29.015223  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:29.014524  166585 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa...
	I0203 11:41:29.183178  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:29.182990  166585 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/old-k8s-version-517711.rawdisk...
	I0203 11:41:29.183218  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Writing magic tar header
	I0203 11:41:29.183238  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Writing SSH key tar header
	I0203 11:41:29.183251  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:29.183219  166585 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711 ...
	I0203 11:41:29.183416  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711
	I0203 11:41:29.183481  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines
	I0203 11:41:29.183501  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:41:29.183516  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711 (perms=drwx------)
	I0203 11:41:29.183608  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432
	I0203 11:41:29.183676  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines (perms=drwxr-xr-x)
	I0203 11:41:29.183707  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0203 11:41:29.183732  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins
	I0203 11:41:29.183758  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home
	I0203 11:41:29.183776  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube (perms=drwxr-xr-x)
	I0203 11:41:29.183798  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432 (perms=drwxrwxr-x)
	I0203 11:41:29.183807  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | skipping /home - not owner
	I0203 11:41:29.183836  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0203 11:41:29.183870  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0203 11:41:29.183897  166532 main.go:141] libmachine: (old-k8s-version-517711) creating domain...
	I0203 11:41:29.186489  166532 main.go:141] libmachine: (old-k8s-version-517711) define libvirt domain using xml: 
	I0203 11:41:29.186513  166532 main.go:141] libmachine: (old-k8s-version-517711) <domain type='kvm'>
	I0203 11:41:29.186524  166532 main.go:141] libmachine: (old-k8s-version-517711)   <name>old-k8s-version-517711</name>
	I0203 11:41:29.186531  166532 main.go:141] libmachine: (old-k8s-version-517711)   <memory unit='MiB'>2200</memory>
	I0203 11:41:29.186538  166532 main.go:141] libmachine: (old-k8s-version-517711)   <vcpu>2</vcpu>
	I0203 11:41:29.186545  166532 main.go:141] libmachine: (old-k8s-version-517711)   <features>
	I0203 11:41:29.186556  166532 main.go:141] libmachine: (old-k8s-version-517711)     <acpi/>
	I0203 11:41:29.186566  166532 main.go:141] libmachine: (old-k8s-version-517711)     <apic/>
	I0203 11:41:29.186574  166532 main.go:141] libmachine: (old-k8s-version-517711)     <pae/>
	I0203 11:41:29.186584  166532 main.go:141] libmachine: (old-k8s-version-517711)     
	I0203 11:41:29.186591  166532 main.go:141] libmachine: (old-k8s-version-517711)   </features>
	I0203 11:41:29.186601  166532 main.go:141] libmachine: (old-k8s-version-517711)   <cpu mode='host-passthrough'>
	I0203 11:41:29.186608  166532 main.go:141] libmachine: (old-k8s-version-517711)   
	I0203 11:41:29.186621  166532 main.go:141] libmachine: (old-k8s-version-517711)   </cpu>
	I0203 11:41:29.186629  166532 main.go:141] libmachine: (old-k8s-version-517711)   <os>
	I0203 11:41:29.186635  166532 main.go:141] libmachine: (old-k8s-version-517711)     <type>hvm</type>
	I0203 11:41:29.186643  166532 main.go:141] libmachine: (old-k8s-version-517711)     <boot dev='cdrom'/>
	I0203 11:41:29.186649  166532 main.go:141] libmachine: (old-k8s-version-517711)     <boot dev='hd'/>
	I0203 11:41:29.186657  166532 main.go:141] libmachine: (old-k8s-version-517711)     <bootmenu enable='no'/>
	I0203 11:41:29.186663  166532 main.go:141] libmachine: (old-k8s-version-517711)   </os>
	I0203 11:41:29.186670  166532 main.go:141] libmachine: (old-k8s-version-517711)   <devices>
	I0203 11:41:29.186677  166532 main.go:141] libmachine: (old-k8s-version-517711)     <disk type='file' device='cdrom'>
	I0203 11:41:29.186690  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/boot2docker.iso'/>
	I0203 11:41:29.186703  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target dev='hdc' bus='scsi'/>
	I0203 11:41:29.186710  166532 main.go:141] libmachine: (old-k8s-version-517711)       <readonly/>
	I0203 11:41:29.186723  166532 main.go:141] libmachine: (old-k8s-version-517711)     </disk>
	I0203 11:41:29.186734  166532 main.go:141] libmachine: (old-k8s-version-517711)     <disk type='file' device='disk'>
	I0203 11:41:29.186743  166532 main.go:141] libmachine: (old-k8s-version-517711)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0203 11:41:29.186762  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/old-k8s-version-517711.rawdisk'/>
	I0203 11:41:29.186776  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target dev='hda' bus='virtio'/>
	I0203 11:41:29.186784  166532 main.go:141] libmachine: (old-k8s-version-517711)     </disk>
	I0203 11:41:29.186795  166532 main.go:141] libmachine: (old-k8s-version-517711)     <interface type='network'>
	I0203 11:41:29.186803  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source network='mk-old-k8s-version-517711'/>
	I0203 11:41:29.186810  166532 main.go:141] libmachine: (old-k8s-version-517711)       <model type='virtio'/>
	I0203 11:41:29.186818  166532 main.go:141] libmachine: (old-k8s-version-517711)     </interface>
	I0203 11:41:29.186829  166532 main.go:141] libmachine: (old-k8s-version-517711)     <interface type='network'>
	I0203 11:41:29.186837  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source network='default'/>
	I0203 11:41:29.186845  166532 main.go:141] libmachine: (old-k8s-version-517711)       <model type='virtio'/>
	I0203 11:41:29.186852  166532 main.go:141] libmachine: (old-k8s-version-517711)     </interface>
	I0203 11:41:29.186862  166532 main.go:141] libmachine: (old-k8s-version-517711)     <serial type='pty'>
	I0203 11:41:29.186870  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target port='0'/>
	I0203 11:41:29.186879  166532 main.go:141] libmachine: (old-k8s-version-517711)     </serial>
	I0203 11:41:29.186887  166532 main.go:141] libmachine: (old-k8s-version-517711)     <console type='pty'>
	I0203 11:41:29.186894  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target type='serial' port='0'/>
	I0203 11:41:29.186906  166532 main.go:141] libmachine: (old-k8s-version-517711)     </console>
	I0203 11:41:29.186912  166532 main.go:141] libmachine: (old-k8s-version-517711)     <rng model='virtio'>
	I0203 11:41:29.186924  166532 main.go:141] libmachine: (old-k8s-version-517711)       <backend model='random'>/dev/random</backend>
	I0203 11:41:29.186937  166532 main.go:141] libmachine: (old-k8s-version-517711)     </rng>
	I0203 11:41:29.186944  166532 main.go:141] libmachine: (old-k8s-version-517711)     
	I0203 11:41:29.186954  166532 main.go:141] libmachine: (old-k8s-version-517711)     
	I0203 11:41:29.186961  166532 main.go:141] libmachine: (old-k8s-version-517711)   </devices>
	I0203 11:41:29.186967  166532 main.go:141] libmachine: (old-k8s-version-517711) </domain>
	I0203 11:41:29.186978  166532 main.go:141] libmachine: (old-k8s-version-517711) 
	I0203 11:41:29.192539  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:6c:98:20 in network default
	I0203 11:41:29.193289  166532 main.go:141] libmachine: (old-k8s-version-517711) starting domain...
	I0203 11:41:29.193323  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:29.193332  166532 main.go:141] libmachine: (old-k8s-version-517711) ensuring networks are active...
	I0203 11:41:29.194158  166532 main.go:141] libmachine: (old-k8s-version-517711) Ensuring network default is active
	I0203 11:41:29.194566  166532 main.go:141] libmachine: (old-k8s-version-517711) Ensuring network mk-old-k8s-version-517711 is active
	I0203 11:41:29.195252  166532 main.go:141] libmachine: (old-k8s-version-517711) getting domain XML...
	I0203 11:41:29.196192  166532 main.go:141] libmachine: (old-k8s-version-517711) creating domain...
	I0203 11:41:30.595230  166532 main.go:141] libmachine: (old-k8s-version-517711) waiting for IP...
	I0203 11:41:30.596211  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:30.596769  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:30.596848  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:30.596797  166585 retry.go:31] will retry after 188.638955ms: waiting for domain to come up
	I0203 11:41:30.787921  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:30.788690  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:30.788725  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:30.788648  166585 retry.go:31] will retry after 299.90555ms: waiting for domain to come up
	I0203 11:41:31.090527  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:31.091045  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:31.091072  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:31.090995  166585 retry.go:31] will retry after 395.922052ms: waiting for domain to come up
	I0203 11:41:31.488852  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:31.489818  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:31.489876  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:31.489800  166585 retry.go:31] will retry after 578.898423ms: waiting for domain to come up
	I0203 11:41:29.808115  163102 pod_ready.go:103] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"False"
	I0203 11:41:32.307834  163102 pod_ready.go:103] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"False"
	I0203 11:41:30.106246  164707 main.go:141] libmachine: (bridge-927018) Calling .GetIP
	I0203 11:41:30.109749  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:30.110245  164707 main.go:141] libmachine: (bridge-927018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:e6", ip: ""} in network mk-bridge-927018: {Iface:virbr3 ExpiryTime:2025-02-03 12:41:17 +0000 UTC Type:0 Mac:52:54:00:58:2e:e6 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:bridge-927018 Clientid:01:52:54:00:58:2e:e6}
	I0203 11:41:30.110271  164707 main.go:141] libmachine: (bridge-927018) DBG | domain bridge-927018 has defined IP address 192.168.39.220 and MAC address 52:54:00:58:2e:e6 in network mk-bridge-927018
	I0203 11:41:30.110512  164707 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0203 11:41:30.115259  164707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:41:30.129880  164707 kubeadm.go:883] updating cluster {Name:bridge-927018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:41:30.129987  164707 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:41:30.130058  164707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:41:30.172146  164707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0203 11:41:30.172231  164707 ssh_runner.go:195] Run: which lz4
	I0203 11:41:30.176051  164707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:41:30.180052  164707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:41:30.180090  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0203 11:41:31.587012  164707 crio.go:462] duration metric: took 1.410980851s to copy over tarball
	I0203 11:41:31.587097  164707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:41:34.071460  164707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.484330004s)
	I0203 11:41:34.071494  164707 crio.go:469] duration metric: took 2.484445601s to extract the tarball
	I0203 11:41:34.071505  164707 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:41:34.120799  164707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:41:34.166305  164707 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:41:34.166335  164707 cache_images.go:84] Images are preloaded, skipping loading
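The preload path above works by listing images through crictl, noticing that the expected kube-apiserver tag is missing, copying and extracting the preload tarball, and listing again to confirm the images are now present. A rough sketch of that lookup follows, assuming the JSON shape crictl prints (an images array whose entries carry repoTags); this is illustrative and not the code in cache_images.go:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// crictlImages models only the fields this sketch needs from
// `crictl images --output json` (field names are an assumption).
type crictlImages struct {
    Images []struct {
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

func hasImage(tag string) (bool, error) {
    out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    if err != nil {
        return false, err
    }
    var imgs crictlImages
    if err := json.Unmarshal(out, &imgs); err != nil {
        return false, err
    }
    for _, img := range imgs.Images {
        for _, t := range img.RepoTags {
            if t == tag {
                return true, nil
            }
        }
    }
    return false, nil
}

func main() {
    ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.1")
    fmt.Println(ok, err)
}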
	I0203 11:41:34.166347  164707 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.32.1 crio true true} ...
	I0203 11:41:34.166482  164707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-927018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0203 11:41:34.166572  164707 ssh_runner.go:195] Run: crio config
	I0203 11:41:34.222975  164707 cni.go:84] Creating CNI manager for "bridge"
	I0203 11:41:34.223012  164707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:41:34.223041  164707 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-927018 NodeName:bridge-927018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:41:34.223202  164707 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-927018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
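The generated kubeadm.yaml above is a multi-document stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A small sketch that walks such a file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml; both are assumptions for illustration, minikube itself templates this file rather than parsing it:

package main

import (
    "fmt"
    "io"
    "log"
    "os"

    "gopkg.in/yaml.v3"
)

func main() {
    f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Decode document by document; the decoder stops at io.EOF after the last one.
    dec := yaml.NewDecoder(f)
    for {
        var doc map[string]interface{}
        if err := dec.Decode(&doc); err == io.EOF {
            break
        } else if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    }
}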
	
	I0203 11:41:34.223274  164707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:41:34.233441  164707 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:41:34.233516  164707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:41:34.243091  164707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0203 11:41:34.260154  164707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:41:34.277511  164707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0203 11:41:34.296925  164707 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0203 11:41:34.302193  164707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:41:34.316381  164707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:41:34.457292  164707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:41:34.474460  164707 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018 for IP: 192.168.39.220
	I0203 11:41:34.474484  164707 certs.go:194] generating shared ca certs ...
	I0203 11:41:34.474502  164707 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:34.474681  164707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:41:34.474749  164707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:41:34.474764  164707 certs.go:256] generating profile certs ...
	I0203 11:41:34.474835  164707 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.key
	I0203 11:41:34.474853  164707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt with IP's: []
	I0203 11:41:34.631284  164707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt ...
	I0203 11:41:34.631313  164707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: {Name:mk805a242a66cd358669f20463bf3adff6ab45a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:34.631502  164707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.key ...
	I0203 11:41:34.631519  164707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.key: {Name:mkec20574c62f55fb3402d221e9754b6d16b43b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:34.631621  164707 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.key.0f1dc70e
	I0203 11:41:34.631645  164707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.crt.0f1dc70e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I0203 11:41:34.763675  164707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.crt.0f1dc70e ...
	I0203 11:41:34.763708  164707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.crt.0f1dc70e: {Name:mkc65689c99f4108c56681743580f4d759f134b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:34.763910  164707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.key.0f1dc70e ...
	I0203 11:41:34.763927  164707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.key.0f1dc70e: {Name:mk2d3cc43c22febbf02f7416cdbd428c30f6a4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:34.764022  164707 certs.go:381] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.crt.0f1dc70e -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.crt
	I0203 11:41:34.764114  164707 certs.go:385] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.key.0f1dc70e -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.key
	I0203 11:41:34.764189  164707 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.key
	I0203 11:41:34.764210  164707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.crt with IP's: []
	I0203 11:41:32.070941  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:32.071608  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:32.071631  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:32.071574  166585 retry.go:31] will retry after 706.5192ms: waiting for domain to come up
	I0203 11:41:32.780456  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:32.781091  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:32.781123  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:32.781057  166585 retry.go:31] will retry after 804.047535ms: waiting for domain to come up
	I0203 11:41:33.587298  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:33.587844  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:33.587875  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:33.587807  166585 retry.go:31] will retry after 912.319933ms: waiting for domain to come up
	I0203 11:41:34.501523  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:34.502119  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:34.502147  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:34.502104  166585 retry.go:31] will retry after 1.13391392s: waiting for domain to come up
	I0203 11:41:35.637314  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:35.637847  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:35.637912  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:35.637837  166585 retry.go:31] will retry after 1.13199998s: waiting for domain to come up
	I0203 11:41:36.771306  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:36.771759  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:36.771814  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:36.771739  166585 retry.go:31] will retry after 1.632808893s: waiting for domain to come up
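The retry.go lines above poll libvirt for the new domain's DHCP lease, sleeping a progressively longer, jittered interval after each miss (579ms, 707ms, 804ms, 912ms, and so on). A generic sketch of that grow-and-jitter loop with a stubbed lookup; the function names, intervals and growth factor are illustrative, not libmachine's:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

var errNoIP = errors.New("waiting for domain to come up")

// lookupIP is a stand-in for querying libvirt for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
    if attempt < 5 {
        return "", errNoIP
    }
    return "192.168.39.220", nil
}

func main() {
    wait := 500 * time.Millisecond
    for attempt := 0; ; attempt++ {
        ip, err := lookupIP(attempt)
        if err == nil {
            fmt.Println("domain is up at", ip)
            return
        }
        // Add jitter and grow the interval, like the increasing delays logged above.
        sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
        fmt.Printf("will retry after %v: %v\n", sleep, err)
        time.Sleep(sleep)
        wait = wait * 3 / 2
    }
}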
	I0203 11:41:34.807201  163102 pod_ready.go:103] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"False"
	I0203 11:41:36.809868  163102 pod_ready.go:103] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"False"
	I0203 11:41:37.291394  163161 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a 6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b 54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8 a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f 1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b 6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e 0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133 760cff0b91b3c74039a0334d5455b8f1477e9810b9e93f1c44b432382f50106c fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1 d807b4b23211d5f8b2789529910feeb3322becd452cb07b493eda47aeb828925 7285783818c4a80412e9fa09078bd55760ac7749a9332e1fc7723ca3eb23aaef 8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485 ee4bc2b0e7bdb403ebaf2932a1dec44d7520ff23c7276e93a15f92baa43a8064: (25.633158546s)
	W0203 11:41:37.291497  163161 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a 6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b 54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8 a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f 1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b 6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e 0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133 760cff0b91b3c74039a0334d5455b8f1477e9810b9e93f1c44b432382f50106c fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1 d807b4b23211d5f8b2789529910feeb3322becd452cb07b493eda47aeb828925 7285783818c4a80412e9fa09078bd55760ac7749a9332e1fc7723ca3eb23aaef 8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485 ee4bc2b0e7bdb403ebaf2932a1dec44d7520ff23c7276e93a15f92baa43a8064: Process exited with status 1
	stdout:
	3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a
	6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b
	54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8
	a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f
	1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b
	6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e
	
	stderr:
	E0203 11:41:37.244446    3251 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133\": container with ID starting with 0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133 not found: ID does not exist" containerID="0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133"
	time="2025-02-03T11:41:37Z" level=fatal msg="stopping the container \"0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133\": rpc error: code = NotFound desc = could not find container \"0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133\": container with ID starting with 0a22dc454b6fcb59c97268982bd197672106f35e8cd1c4064fa455c5712d1133 not found: ID does not exist"
	I0203 11:41:37.291582  163161 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 11:41:37.349297  163161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:41:37.364098  163161 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Feb  3 11:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Feb  3 11:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Feb  3 11:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Feb  3 11:40 /etc/kubernetes/scheduler.conf
	
	I0203 11:41:37.364202  163161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:41:37.376656  163161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:41:37.387656  163161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:41:37.400969  163161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:41:37.401041  163161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:41:37.413331  163161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:41:37.425002  163161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:41:37.425090  163161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:41:37.435625  163161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:41:37.446331  163161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:41:37.507468  163161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:41:38.461655  163161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:41:34.811977  164707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.crt ...
	I0203 11:41:34.812005  164707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.crt: {Name:mka64bd914f3c6b85a1875f7f0f2baf47898e26f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:34.812191  164707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.key ...
	I0203 11:41:34.812215  164707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.key: {Name:mk358be06008474b228e4fd5da62948b00fd1012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
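The certs.go and crypto.go lines above mint a client cert, an apiserver cert carrying the service and node IPs as SANs, and an aggregator proxy-client cert, all signed by the cached minikubeCA. A compressed standard-library sketch of that pattern, illustrative only: key size, lifetime and subjects are placeholders, and the CA here is generated on the fly rather than loaded from .minikube.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA standing in for minikubeCA.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    if err != nil {
        log.Fatal(err)
    }
    caCert, err := x509.ParseCertificate(caDER)
    if err != nil {
        log.Fatal(err)
    }

    // Leaf certificate with the IP SANs seen in "Generating cert ... with IP's: [...]".
    leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    leafTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.220"),
        },
    }
    leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    if err != nil {
        log.Fatal(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}

As the "skipping valid "minikubeCA" ca cert" lines earlier in this run show, the real flow reuses the CA cached under .minikube and only regenerates the per-profile certificates.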
	I0203 11:41:34.812472  164707 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:41:34.812516  164707 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:41:34.812531  164707 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:41:34.812562  164707 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:41:34.812587  164707 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:41:34.812629  164707 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:41:34.812686  164707 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:41:34.813319  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:41:34.838816  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:41:34.865328  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:41:34.890116  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:41:34.912293  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0203 11:41:34.936338  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:41:34.959585  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:41:34.986815  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:41:35.010024  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:41:35.034628  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:41:35.059629  164707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:41:35.085344  164707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:41:35.102386  164707 ssh_runner.go:195] Run: openssl version
	I0203 11:41:35.108398  164707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:41:35.119547  164707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:41:35.124121  164707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:41:35.124181  164707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:41:35.129984  164707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:41:35.146180  164707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:41:35.170967  164707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:41:35.180195  164707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:41:35.180266  164707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:41:35.187940  164707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:41:35.203214  164707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:41:35.213602  164707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:41:35.218125  164707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:41:35.218185  164707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:41:35.223902  164707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
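Each CA bundle installed above also gets a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted certs in /etc/ssl/certs. A small sketch of that step, assuming openssl is on PATH and using a temp directory in place of /etc/ssl/certs:

package main

import (
    "log"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates a
// <hash>.0 symlink pointing at the certificate, mimicking `ln -fs` above.
func linkByHash(certPath, certsDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    link := filepath.Join(certsDir, hash+".0")
    _ = os.Remove(link) // replace an existing link, as -f would
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
        log.Fatal(err)
    }
}

The .0 suffix is simply the first link for that hash; OpenSSL would look for .1, .2, and so on if several CAs shared a subject hash.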
	I0203 11:41:35.234601  164707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:41:35.239499  164707 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:41:35.239552  164707 kubeadm.go:392] StartCluster: {Name:bridge-927018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:41:35.239636  164707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:41:35.239685  164707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:41:35.282176  164707 cri.go:89] found id: ""
	I0203 11:41:35.282260  164707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:41:35.292528  164707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:41:35.302703  164707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:41:35.314326  164707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:41:35.314349  164707 kubeadm.go:157] found existing configuration files:
	
	I0203 11:41:35.314399  164707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:41:35.324894  164707 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:41:35.324956  164707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:41:35.335911  164707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:41:35.345882  164707 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:41:35.345952  164707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:41:35.356867  164707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:41:35.366125  164707 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:41:35.366190  164707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:41:35.376137  164707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:41:35.385419  164707 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:41:35.385493  164707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:41:35.394632  164707 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:41:35.564194  164707 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:41:39.311120  163102 pod_ready.go:103] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"False"
	I0203 11:41:40.311775  163102 pod_ready.go:93] pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace has status "Ready":"True"
	I0203 11:41:40.311876  163102 pod_ready.go:82] duration metric: took 14.511481365s for pod "coredns-668d6bf9bc-5zwd6" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.311900  163102 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.320858  163102 pod_ready.go:93] pod "etcd-flannel-927018" in "kube-system" namespace has status "Ready":"True"
	I0203 11:41:40.320940  163102 pod_ready.go:82] duration metric: took 9.021746ms for pod "etcd-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.320964  163102 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.331177  163102 pod_ready.go:93] pod "kube-apiserver-flannel-927018" in "kube-system" namespace has status "Ready":"True"
	I0203 11:41:40.331253  163102 pod_ready.go:82] duration metric: took 10.271809ms for pod "kube-apiserver-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.331276  163102 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.340793  163102 pod_ready.go:93] pod "kube-controller-manager-flannel-927018" in "kube-system" namespace has status "Ready":"True"
	I0203 11:41:40.340874  163102 pod_ready.go:82] duration metric: took 9.579946ms for pod "kube-controller-manager-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.340898  163102 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-bn7s5" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.348066  163102 pod_ready.go:93] pod "kube-proxy-bn7s5" in "kube-system" namespace has status "Ready":"True"
	I0203 11:41:40.348147  163102 pod_ready.go:82] duration metric: took 7.231873ms for pod "kube-proxy-bn7s5" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.348180  163102 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.705988  163102 pod_ready.go:93] pod "kube-scheduler-flannel-927018" in "kube-system" namespace has status "Ready":"True"
	I0203 11:41:40.706036  163102 pod_ready.go:82] duration metric: took 357.83697ms for pod "kube-scheduler-flannel-927018" in "kube-system" namespace to be "Ready" ...
	I0203 11:41:40.706055  163102 pod_ready.go:39] duration metric: took 14.916966833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
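The pod_ready.go lines above repeatedly fetch each system pod and wait for its Ready condition to turn True. A minimal client-go sketch of that check for the coredns pod named in the log; the default kubeconfig location and a single Get (rather than minikube's poll loop with a timeout) are simplifications, not minikube's code:

package main

import (
    "context"
    "fmt"
    "log"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-668d6bf9bc-5zwd6", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Ready:", podReady(pod))
}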
	I0203 11:41:40.706086  163102 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:41:40.706147  163102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:41:40.734250  163102 api_server.go:72] duration metric: took 26.288082037s to wait for apiserver process to appear ...
	I0203 11:41:40.734286  163102 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:41:40.734309  163102 api_server.go:253] Checking apiserver healthz at https://192.168.72.80:8443/healthz ...
	I0203 11:41:40.741481  163102 api_server.go:279] https://192.168.72.80:8443/healthz returned 200:
	ok
	I0203 11:41:40.742741  163102 api_server.go:141] control plane version: v1.32.1
	I0203 11:41:40.742770  163102 api_server.go:131] duration metric: took 8.476029ms to wait for apiserver health ...
	I0203 11:41:40.742781  163102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:41:40.911657  163102 system_pods.go:59] 7 kube-system pods found
	I0203 11:41:40.911705  163102 system_pods.go:61] "coredns-668d6bf9bc-5zwd6" [eb2f6841-fbf2-49d0-97a5-9a710b3d7791] Running
	I0203 11:41:40.911715  163102 system_pods.go:61] "etcd-flannel-927018" [6de41de7-d9ae-42f7-9511-6a93491fa230] Running
	I0203 11:41:40.911721  163102 system_pods.go:61] "kube-apiserver-flannel-927018" [ab8bd495-3df0-4adc-8fd7-d319a368527b] Running
	I0203 11:41:40.911726  163102 system_pods.go:61] "kube-controller-manager-flannel-927018" [e3441d94-4453-4005-b4d2-36e268923a13] Running
	I0203 11:41:40.911731  163102 system_pods.go:61] "kube-proxy-bn7s5" [69a072aa-3bec-4553-bd26-267a2c96afb0] Running
	I0203 11:41:40.911736  163102 system_pods.go:61] "kube-scheduler-flannel-927018" [deb5a625-0d8f-4c13-b67b-d395eb5ab61b] Running
	I0203 11:41:40.911741  163102 system_pods.go:61] "storage-provisioner" [2568c112-b220-4fb9-ab42-21cdf920cfb5] Running
	I0203 11:41:40.911747  163102 system_pods.go:74] duration metric: took 168.960411ms to wait for pod list to return data ...
	I0203 11:41:40.911758  163102 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:41:41.106574  163102 default_sa.go:45] found service account: "default"
	I0203 11:41:41.106613  163102 default_sa.go:55] duration metric: took 194.847707ms for default service account to be created ...
	I0203 11:41:41.106625  163102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:41:41.309027  163102 system_pods.go:86] 7 kube-system pods found
	I0203 11:41:41.309065  163102 system_pods.go:89] "coredns-668d6bf9bc-5zwd6" [eb2f6841-fbf2-49d0-97a5-9a710b3d7791] Running
	I0203 11:41:41.309074  163102 system_pods.go:89] "etcd-flannel-927018" [6de41de7-d9ae-42f7-9511-6a93491fa230] Running
	I0203 11:41:41.309081  163102 system_pods.go:89] "kube-apiserver-flannel-927018" [ab8bd495-3df0-4adc-8fd7-d319a368527b] Running
	I0203 11:41:41.309087  163102 system_pods.go:89] "kube-controller-manager-flannel-927018" [e3441d94-4453-4005-b4d2-36e268923a13] Running
	I0203 11:41:41.309094  163102 system_pods.go:89] "kube-proxy-bn7s5" [69a072aa-3bec-4553-bd26-267a2c96afb0] Running
	I0203 11:41:41.309099  163102 system_pods.go:89] "kube-scheduler-flannel-927018" [deb5a625-0d8f-4c13-b67b-d395eb5ab61b] Running
	I0203 11:41:41.309104  163102 system_pods.go:89] "storage-provisioner" [2568c112-b220-4fb9-ab42-21cdf920cfb5] Running
	I0203 11:41:41.309119  163102 system_pods.go:126] duration metric: took 202.487435ms to wait for k8s-apps to be running ...
	I0203 11:41:41.309129  163102 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:41:41.309190  163102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:41:41.329785  163102 system_svc.go:56] duration metric: took 20.641921ms WaitForService to wait for kubelet
	I0203 11:41:41.329902  163102 kubeadm.go:582] duration metric: took 26.883741313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:41:41.329940  163102 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:41:41.505495  163102 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:41:41.505537  163102 node_conditions.go:123] node cpu capacity is 2
	I0203 11:41:41.505569  163102 node_conditions.go:105] duration metric: took 175.582104ms to run NodePressure ...
	I0203 11:41:41.505586  163102 start.go:241] waiting for startup goroutines ...
	I0203 11:41:41.505596  163102 start.go:246] waiting for cluster config update ...
	I0203 11:41:41.505616  163102 start.go:255] writing updated cluster config ...
	I0203 11:41:41.505951  163102 ssh_runner.go:195] Run: rm -f paused
	I0203 11:41:41.583315  163102 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:41:41.585301  163102 out.go:177] * Done! kubectl is now configured to use "flannel-927018" cluster and "default" namespace by default
	I0203 11:41:38.405840  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:38.406394  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:38.406424  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:38.406354  166585 retry.go:31] will retry after 2.223756189s: waiting for domain to come up
	I0203 11:41:40.632375  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:40.633012  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:40.633048  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:40.632945  166585 retry.go:31] will retry after 2.448781389s: waiting for domain to come up
	I0203 11:41:38.735961  163161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:41:38.825980  163161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:41:38.954135  163161 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:41:38.954236  163161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:41:39.455208  163161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:41:39.954837  163161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:41:39.993650  163161 api_server.go:72] duration metric: took 1.039513704s to wait for apiserver process to appear ...
	I0203 11:41:39.993686  163161 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:41:39.993711  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:39.994264  163161 api_server.go:269] stopped: https://192.168.50.247:8443/healthz: Get "https://192.168.50.247:8443/healthz": dial tcp 192.168.50.247:8443: connect: connection refused
	I0203 11:41:40.493885  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:42.513024  163161 api_server.go:279] https://192.168.50.247:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:41:42.513055  163161 api_server.go:103] status: https://192.168.50.247:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:41:42.513069  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:42.525730  163161 api_server.go:279] https://192.168.50.247:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:41:42.525763  163161 api_server.go:103] status: https://192.168.50.247:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:41:42.993956  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:43.000110  163161 api_server.go:279] https://192.168.50.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:41:43.000147  163161 api_server.go:103] status: https://192.168.50.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:41:43.493771  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:43.499316  163161 api_server.go:279] https://192.168.50.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:41:43.499340  163161 api_server.go:103] status: https://192.168.50.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:41:43.994630  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:43.999431  163161 api_server.go:279] https://192.168.50.247:8443/healthz returned 200:
	ok
	I0203 11:41:44.006642  163161 api_server.go:141] control plane version: v1.32.1
	I0203 11:41:44.006676  163161 api_server.go:131] duration metric: took 4.012981603s to wait for apiserver health ...
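The api_server.go lines above poll https://<node-ip>:8443/healthz until it answers 200; the 403 (anonymous user) and 500 (rbac/bootstrap-roles and bootstrap-system-priority-classes still pending) responses are the expected intermediate states while the apiserver finishes its post-start hooks. A bare-bones sketch of such a poll, with certificate verification skipped since it runs as an anonymous client; the address comes from the log and the timeouts are illustrative:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        resp, err := client.Get("https://192.168.50.247:8443/healthz")
        if err != nil {
            fmt.Println("not up yet:", err)
        } else {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%d: %s\n", resp.StatusCode, body)
            if resp.StatusCode == http.StatusOK {
                return // healthz reported "ok"
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
}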
	I0203 11:41:44.006689  163161 cni.go:84] Creating CNI manager for ""
	I0203 11:41:44.006699  163161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:41:44.007820  163161 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 11:41:44.008868  163161 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 11:41:44.019857  163161 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0203 11:41:44.037704  163161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:41:44.037810  163161 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 11:41:44.037843  163161 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 11:41:44.047733  163161 system_pods.go:59] 8 kube-system pods found
	I0203 11:41:44.047764  163161 system_pods.go:61] "coredns-668d6bf9bc-7zrrj" [c82a0d7c-c194-4892-800c-ff682a08a3ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:41:44.047771  163161 system_pods.go:61] "coredns-668d6bf9bc-jpw27" [483be4ea-682d-4830-abfb-28ad521fa94f] Running
	I0203 11:41:44.047777  163161 system_pods.go:61] "etcd-kubernetes-upgrade-700839" [15d71af6-e2d8-4c83-bfc8-5065bd7d4855] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:41:44.047784  163161 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-700839" [7f4836c5-6c72-4e74-9009-3ec54a7301e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:41:44.047793  163161 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-700839" [3e295826-1f37-4cd8-be47-3625405039c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:41:44.047798  163161 system_pods.go:61] "kube-proxy-k59wk" [8003ccf3-8f64-4745-b39a-6c4493831e88] Running
	I0203 11:41:44.047802  163161 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-700839" [5415c3ac-628b-4af7-806a-0503b15803f2] Running
	I0203 11:41:44.047805  163161 system_pods.go:61] "storage-provisioner" [99e69acd-910d-4bb4-a6a6-62604d394f5f] Running
	I0203 11:41:44.047811  163161 system_pods.go:74] duration metric: took 10.082927ms to wait for pod list to return data ...
	I0203 11:41:44.047819  163161 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:41:44.051351  163161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:41:44.051383  163161 node_conditions.go:123] node cpu capacity is 2
	I0203 11:41:44.051396  163161 node_conditions.go:105] duration metric: took 3.571818ms to run NodePressure ...
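The "waiting for kube-system pods" and NodePressure checks above can be reproduced with client-go. A sketch under the assumption that the kubeconfig path from the log is read directly; minikube's own system_pods.go and node_conditions.go wrap this differently:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the kubeconfig minikube just updated.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20354-109432/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// "waiting for kube-system pods to appear": list them and print phase.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
    	}

    	// "verifying NodePressure condition": read the node capacity that the
    	// log reports as ephemeral storage and CPU.
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Println("ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
    		fmt.Println("cpu:", n.Status.Capacity.Cpu().String())
    	}
    }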
	I0203 11:41:44.051417  163161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:41:44.331801  163161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:41:44.343818  163161 ops.go:34] apiserver oom_adj: -16
	I0203 11:41:44.343842  163161 kubeadm.go:597] duration metric: took 32.852966085s to restartPrimaryControlPlane
	I0203 11:41:44.343853  163161 kubeadm.go:394] duration metric: took 33.014394845s to StartCluster
	I0203 11:41:44.343873  163161 settings.go:142] acquiring lock: {Name:mk7f08542cc4ae303b222901a9d369cc0753d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:44.343965  163161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:41:44.344868  163161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:44.345150  163161 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:41:44.345227  163161 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 11:41:44.345345  163161 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-700839"
	I0203 11:41:44.345371  163161 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-700839"
	I0203 11:41:44.345371  163161 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-700839"
	I0203 11:41:44.345403  163161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-700839"
	I0203 11:41:44.345414  163161 config.go:182] Loaded profile config "kubernetes-upgrade-700839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	W0203 11:41:44.345383  163161 addons.go:247] addon storage-provisioner should already be in state true
	I0203 11:41:44.345508  163161 host.go:66] Checking if "kubernetes-upgrade-700839" exists ...
	I0203 11:41:44.345844  163161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:44.345880  163161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:44.345927  163161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:44.345964  163161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:44.347457  163161 out.go:177] * Verifying Kubernetes components...
	I0203 11:41:44.348817  163161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:41:44.363002  163161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0203 11:41:44.363769  163161 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:44.364503  163161 main.go:141] libmachine: Using API Version  1
	I0203 11:41:44.364529  163161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:44.365031  163161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I0203 11:41:44.365331  163161 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:44.365411  163161 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:44.365828  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetState
	I0203 11:41:44.365929  163161 main.go:141] libmachine: Using API Version  1
	I0203 11:41:44.365947  163161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:44.366389  163161 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:44.367143  163161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:44.367206  163161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:44.368819  163161 kapi.go:59] client config for kubernetes-upgrade-700839: &rest.Config{Host:"https://192.168.50.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.crt", KeyFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kubernetes-upgrade-700839/client.key", CAFile:"/home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 11:41:44.369079  163161 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-700839"
	W0203 11:41:44.369094  163161 addons.go:247] addon default-storageclass should already be in state true
	I0203 11:41:44.369118  163161 host.go:66] Checking if "kubernetes-upgrade-700839" exists ...
	I0203 11:41:44.369393  163161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:44.369435  163161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:44.383786  163161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0203 11:41:44.384369  163161 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:44.384871  163161 main.go:141] libmachine: Using API Version  1
	I0203 11:41:44.384884  163161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:44.385294  163161 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:44.385469  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetState
	I0203 11:41:44.387364  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:41:44.388229  163161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0203 11:41:44.388785  163161 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:44.388891  163161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:41:44.389276  163161 main.go:141] libmachine: Using API Version  1
	I0203 11:41:44.389297  163161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:44.389630  163161 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:44.389864  163161 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:41:44.389883  163161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:41:44.389902  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:41:44.390146  163161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:44.390204  163161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:44.393207  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:41:44.393645  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:39:50 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:41:44.393683  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:41:44.393939  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:41:44.394162  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:41:44.394336  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:41:44.394545  163161 sshutil.go:53] new ssh client: &{IP:192.168.50.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa Username:docker}
	I0203 11:41:44.411774  163161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0203 11:41:44.412409  163161 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:44.412995  163161 main.go:141] libmachine: Using API Version  1
	I0203 11:41:44.413016  163161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:44.413482  163161 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:44.413707  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetState
	I0203 11:41:44.415900  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .DriverName
	I0203 11:41:44.416123  163161 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:41:44.416141  163161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:41:44.416170  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHHostname
	I0203 11:41:44.419924  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:41:44.420404  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:50", ip: ""} in network mk-kubernetes-upgrade-700839: {Iface:virbr2 ExpiryTime:2025-02-03 12:39:50 +0000 UTC Type:0 Mac:52:54:00:e8:3d:50 Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:kubernetes-upgrade-700839 Clientid:01:52:54:00:e8:3d:50}
	I0203 11:41:44.420426  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | domain kubernetes-upgrade-700839 has defined IP address 192.168.50.247 and MAC address 52:54:00:e8:3d:50 in network mk-kubernetes-upgrade-700839
	I0203 11:41:44.420741  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHPort
	I0203 11:41:44.421151  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHKeyPath
	I0203 11:41:44.421347  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .GetSSHUsername
	I0203 11:41:44.421465  163161 sshutil.go:53] new ssh client: &{IP:192.168.50.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa Username:docker}
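The "scp memory --> file" steps and the ssh clients created here amount to pushing an in-memory manifest to a path on the VM over SSH. A sketch of one way to do that with golang.org/x/crypto/ssh, piping the bytes into sudo tee rather than using minikube's own ssh_runner/scp implementation; the host, user, and key path are taken from the log, the tee approach is an illustrative choice:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushFile copies a byte slice to a remote path by piping it into
    // `sudo tee` over an SSH session.
    func pushFile(addr, keyPath, remotePath string, data []byte) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = bytes.NewReader(data)
    	return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
    }

    func main() {
    	manifest, err := os.ReadFile("storage-provisioner.yaml") // 2676 bytes in the log
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := pushFile("192.168.50.247:22",
    		"/home/jenkins/minikube-integration/20354-109432/.minikube/machines/kubernetes-upgrade-700839/id_rsa",
    		"/etc/kubernetes/addons/storage-provisioner.yaml", manifest); err != nil {
    		fmt.Println(err)
    	}
    }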
	I0203 11:41:44.560226  163161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:41:44.579106  163161 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:41:44.579198  163161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:41:44.595062  163161 api_server.go:72] duration metric: took 249.868223ms to wait for apiserver process to appear ...
	I0203 11:41:44.595097  163161 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:41:44.595125  163161 api_server.go:253] Checking apiserver healthz at https://192.168.50.247:8443/healthz ...
	I0203 11:41:44.603584  163161 api_server.go:279] https://192.168.50.247:8443/healthz returned 200:
	ok
	I0203 11:41:44.605364  163161 api_server.go:141] control plane version: v1.32.1
	I0203 11:41:44.605390  163161 api_server.go:131] duration metric: took 10.284558ms to wait for apiserver health ...
	I0203 11:41:44.605402  163161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:41:44.614062  163161 system_pods.go:59] 8 kube-system pods found
	I0203 11:41:44.614090  163161 system_pods.go:61] "coredns-668d6bf9bc-7zrrj" [c82a0d7c-c194-4892-800c-ff682a08a3ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:41:44.614099  163161 system_pods.go:61] "coredns-668d6bf9bc-jpw27" [483be4ea-682d-4830-abfb-28ad521fa94f] Running
	I0203 11:41:44.614105  163161 system_pods.go:61] "etcd-kubernetes-upgrade-700839" [15d71af6-e2d8-4c83-bfc8-5065bd7d4855] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:41:44.614112  163161 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-700839" [7f4836c5-6c72-4e74-9009-3ec54a7301e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:41:44.614119  163161 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-700839" [3e295826-1f37-4cd8-be47-3625405039c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:41:44.614123  163161 system_pods.go:61] "kube-proxy-k59wk" [8003ccf3-8f64-4745-b39a-6c4493831e88] Running
	I0203 11:41:44.614127  163161 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-700839" [5415c3ac-628b-4af7-806a-0503b15803f2] Running
	I0203 11:41:44.614141  163161 system_pods.go:61] "storage-provisioner" [99e69acd-910d-4bb4-a6a6-62604d394f5f] Running
	I0203 11:41:44.614149  163161 system_pods.go:74] duration metric: took 8.741593ms to wait for pod list to return data ...
	I0203 11:41:44.614163  163161 kubeadm.go:582] duration metric: took 268.978059ms to wait for: map[apiserver:true system_pods:true]
	I0203 11:41:44.614176  163161 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:41:44.616368  163161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:41:44.616392  163161 node_conditions.go:123] node cpu capacity is 2
	I0203 11:41:44.616404  163161 node_conditions.go:105] duration metric: took 2.222642ms to run NodePressure ...
	I0203 11:41:44.616420  163161 start.go:241] waiting for startup goroutines ...
	I0203 11:41:44.692600  163161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:41:44.708270  163161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
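The two addon manifests are applied with the in-VM kubectl and kubeconfig shown in the commands above. A sketch of the same apply step using os/exec locally instead of running it over SSH; the binary and kubeconfig paths are the ones from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddon mirrors the command the log runs on the VM: kubectl apply
    // with the in-VM kubeconfig and binary paths.
    func applyAddon(manifest string) error {
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.32.1/kubectl", "apply", "-f", manifest)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	} {
    		if err := applyAddon(m); err != nil {
    			fmt.Println(err)
    		}
    	}
    }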
	I0203 11:41:45.472547  163161 main.go:141] libmachine: Making call to close driver server
	I0203 11:41:45.472584  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .Close
	I0203 11:41:45.472556  163161 main.go:141] libmachine: Making call to close driver server
	I0203 11:41:45.472694  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .Close
	I0203 11:41:45.472993  163161 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:41:45.473019  163161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:41:45.473027  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Closing plugin on server side
	I0203 11:41:45.473039  163161 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:41:45.473053  163161 main.go:141] libmachine: Making call to close driver server
	I0203 11:41:45.473056  163161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:41:45.473062  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .Close
	I0203 11:41:45.473071  163161 main.go:141] libmachine: Making call to close driver server
	I0203 11:41:45.473082  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .Close
	I0203 11:41:45.473366  163161 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:41:45.473385  163161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:41:45.473420  163161 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:41:45.473434  163161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:41:45.473418  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) DBG | Closing plugin on server side
	I0203 11:41:45.481957  163161 main.go:141] libmachine: Making call to close driver server
	I0203 11:41:45.481979  163161 main.go:141] libmachine: (kubernetes-upgrade-700839) Calling .Close
	I0203 11:41:45.482272  163161 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:41:45.482312  163161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:41:45.484964  163161 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 11:41:45.486211  163161 addons.go:514] duration metric: took 1.140999118s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0203 11:41:45.486245  163161 start.go:246] waiting for cluster config update ...
	I0203 11:41:45.486256  163161 start.go:255] writing updated cluster config ...
	I0203 11:41:45.486483  163161 ssh_runner.go:195] Run: rm -f paused
	I0203 11:41:45.550374  163161 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:41:45.552439  163161 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-700839" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.328243534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582906328085792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a809a801-a321-40c9-a397-76a0a8e89b7f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.328806348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55fa319a-3fa0-47cc-a911-4892f27f7d48 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.328896524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55fa319a-3fa0-47cc-a911-4892f27f7d48 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.329258572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dec8234ece4f96a7f226bf5232bf06f1494c43c439e3c04cb292ae1a70d0f45,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582903135415972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8b076dccdd703dc75b2f508ff62a59e6fd3d7235c1e4ef9b3c6b825ff7a69e,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738582899600903928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upg
rade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d594d6f504a381492020cd1b109439ff8089cfbf922174736d17fcf41cef8fb1,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738582899570455313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-7
00839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4af866feccf00fa25f8ee548b0f4ed180105a3e3753a9b9db721397ece090ee,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738582899551656961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efd4f531524aeaa677d6ce20f1b5c79ac5a0493f0e73cd91b6cc1ea66cc161b,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582893987132163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd674916a6274b846001986cec46f618181c9bb4331cd9fdad4e2ab2ba0f02f6,PodSandboxId:da4c72f041cc5f0cb9ef8f1b22bf298adb3b4d89c4b9907cf9f7db34c8ac66f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1738582876077384329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b941eb5fe4faa7e91c6aa7759d9b24f4546e6a80ae5f9e5d260c1b6bb2c5702,PodSandboxId:b6d2f2d2884133a0f9af6b5aeaeaf9f7f3fd901c047ca75a366190d6a3b437fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738582874
079985988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d2665073ef9a1a0b4efd86c4f5d5ee2bb52908ddd3359187baaca5508ed38e,PodSandboxId:a14b1f6f91d349de592f686ccba0956fc6892737439dd71b321ea9dab5691301,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738582871788518440,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870730288846,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870503850442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381
705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738582869719707697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738582869730616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738582869661072255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e,PodSandboxId:6b51b6c914fd0622e4bcb593ba6b30ed5892f57aceb9504ff045cc285ea84d3d,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582824111323475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1,PodSandboxId:4594db9a31b80aca3715b20e3d219391acdcebce68eb82fcc23de313f4060931,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738582822763272269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485,PodSandboxId:06879ea49ef2b2c7f6767d19f6b319196d280215a6e4b6445fb5d509c173c190,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738582809620525736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55fa319a-3fa0-47cc-a911-4892f27f7d48 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.378376284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cbf3bb1-a7e2-4ab6-a074-e6e522c6d219 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.378482800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cbf3bb1-a7e2-4ab6-a074-e6e522c6d219 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.379804930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05d229d6-1417-43c2-8260-4f1ad7ba345c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.380210086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582906380187923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05d229d6-1417-43c2-8260-4f1ad7ba345c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.380829927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b7bd633-f5b2-46ae-b08e-a43cb337857e name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.380909413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b7bd633-f5b2-46ae-b08e-a43cb337857e name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.381358699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dec8234ece4f96a7f226bf5232bf06f1494c43c439e3c04cb292ae1a70d0f45,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582903135415972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8b076dccdd703dc75b2f508ff62a59e6fd3d7235c1e4ef9b3c6b825ff7a69e,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738582899600903928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upg
rade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d594d6f504a381492020cd1b109439ff8089cfbf922174736d17fcf41cef8fb1,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738582899570455313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-7
00839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4af866feccf00fa25f8ee548b0f4ed180105a3e3753a9b9db721397ece090ee,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738582899551656961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efd4f531524aeaa677d6ce20f1b5c79ac5a0493f0e73cd91b6cc1ea66cc161b,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582893987132163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd674916a6274b846001986cec46f618181c9bb4331cd9fdad4e2ab2ba0f02f6,PodSandboxId:da4c72f041cc5f0cb9ef8f1b22bf298adb3b4d89c4b9907cf9f7db34c8ac66f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1738582876077384329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b941eb5fe4faa7e91c6aa7759d9b24f4546e6a80ae5f9e5d260c1b6bb2c5702,PodSandboxId:b6d2f2d2884133a0f9af6b5aeaeaf9f7f3fd901c047ca75a366190d6a3b437fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738582874
079985988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d2665073ef9a1a0b4efd86c4f5d5ee2bb52908ddd3359187baaca5508ed38e,PodSandboxId:a14b1f6f91d349de592f686ccba0956fc6892737439dd71b321ea9dab5691301,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738582871788518440,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870730288846,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870503850442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381
705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738582869719707697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738582869730616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738582869661072255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e,PodSandboxId:6b51b6c914fd0622e4bcb593ba6b30ed5892f57aceb9504ff045cc285ea84d3d,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582824111323475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1,PodSandboxId:4594db9a31b80aca3715b20e3d219391acdcebce68eb82fcc23de313f4060931,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738582822763272269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485,PodSandboxId:06879ea49ef2b2c7f6767d19f6b319196d280215a6e4b6445fb5d509c173c190,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738582809620525736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b7bd633-f5b2-46ae-b08e-a43cb337857e name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.430795416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7631d210-7278-4ab9-b496-d33e521e4531 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.430882896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7631d210-7278-4ab9-b496-d33e521e4531 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.431958897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13508cc4-f8e5-452a-844e-51945a5f520b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.432325084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582906432302613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13508cc4-f8e5-452a-844e-51945a5f520b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.432892663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d9b8c07-e0e6-46a7-b172-ac374251cbf0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.432957825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d9b8c07-e0e6-46a7-b172-ac374251cbf0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.433323890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dec8234ece4f96a7f226bf5232bf06f1494c43c439e3c04cb292ae1a70d0f45,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582903135415972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8b076dccdd703dc75b2f508ff62a59e6fd3d7235c1e4ef9b3c6b825ff7a69e,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738582899600903928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upg
rade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d594d6f504a381492020cd1b109439ff8089cfbf922174736d17fcf41cef8fb1,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738582899570455313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-7
00839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4af866feccf00fa25f8ee548b0f4ed180105a3e3753a9b9db721397ece090ee,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738582899551656961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efd4f531524aeaa677d6ce20f1b5c79ac5a0493f0e73cd91b6cc1ea66cc161b,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582893987132163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd674916a6274b846001986cec46f618181c9bb4331cd9fdad4e2ab2ba0f02f6,PodSandboxId:da4c72f041cc5f0cb9ef8f1b22bf298adb3b4d89c4b9907cf9f7db34c8ac66f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1738582876077384329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b941eb5fe4faa7e91c6aa7759d9b24f4546e6a80ae5f9e5d260c1b6bb2c5702,PodSandboxId:b6d2f2d2884133a0f9af6b5aeaeaf9f7f3fd901c047ca75a366190d6a3b437fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738582874
079985988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d2665073ef9a1a0b4efd86c4f5d5ee2bb52908ddd3359187baaca5508ed38e,PodSandboxId:a14b1f6f91d349de592f686ccba0956fc6892737439dd71b321ea9dab5691301,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738582871788518440,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870730288846,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870503850442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381
705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738582869719707697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738582869730616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738582869661072255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e,PodSandboxId:6b51b6c914fd0622e4bcb593ba6b30ed5892f57aceb9504ff045cc285ea84d3d,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582824111323475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1,PodSandboxId:4594db9a31b80aca3715b20e3d219391acdcebce68eb82fcc23de313f4060931,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738582822763272269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485,PodSandboxId:06879ea49ef2b2c7f6767d19f6b319196d280215a6e4b6445fb5d509c173c190,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738582809620525736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d9b8c07-e0e6-46a7-b172-ac374251cbf0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.475240109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=852f790e-41e4-4b70-afb1-57758d107101 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.475322734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=852f790e-41e4-4b70-afb1-57758d107101 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.476284103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3af371bc-3408-487f-b527-27c231212e57 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.476710249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738582906476685183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3af371bc-3408-487f-b527-27c231212e57 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.477324972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d67eb1ab-7579-4af6-8131-20dcd9405fd8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.477391584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d67eb1ab-7579-4af6-8131-20dcd9405fd8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:41:46 kubernetes-upgrade-700839 crio[2398]: time="2025-02-03 11:41:46.477839853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dec8234ece4f96a7f226bf5232bf06f1494c43c439e3c04cb292ae1a70d0f45,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582903135415972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8b076dccdd703dc75b2f508ff62a59e6fd3d7235c1e4ef9b3c6b825ff7a69e,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738582899600903928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upg
rade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d594d6f504a381492020cd1b109439ff8089cfbf922174736d17fcf41cef8fb1,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738582899570455313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-7
00839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4af866feccf00fa25f8ee548b0f4ed180105a3e3753a9b9db721397ece090ee,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738582899551656961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efd4f531524aeaa677d6ce20f1b5c79ac5a0493f0e73cd91b6cc1ea66cc161b,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738582893987132163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd674916a6274b846001986cec46f618181c9bb4331cd9fdad4e2ab2ba0f02f6,PodSandboxId:da4c72f041cc5f0cb9ef8f1b22bf298adb3b4d89c4b9907cf9f7db34c8ac66f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1738582876077384329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b941eb5fe4faa7e91c6aa7759d9b24f4546e6a80ae5f9e5d260c1b6bb2c5702,PodSandboxId:b6d2f2d2884133a0f9af6b5aeaeaf9f7f3fd901c047ca75a366190d6a3b437fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738582874
079985988,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d2665073ef9a1a0b4efd86c4f5d5ee2bb52908ddd3359187baaca5508ed38e,PodSandboxId:a14b1f6f91d349de592f686ccba0956fc6892737439dd71b321ea9dab5691301,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738582871788518440,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a,PodSandboxId:ec27f386367fa185b60f67c9ed7e135ee9a5af6e1854cba73a2fd2d7083a01f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870730288846,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jpw27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483be4ea-682d-4830-abfb-28ad521fa94f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b,PodSandboxId:19cb97488df9f4890676f3b8d28db36bf0d0be1670c2d0912e7db133c139cf6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738582870503850442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7zrrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a0d7c-c194-4892-800c-ff682a08a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f,PodSandboxId:7ba6e2ae438d0e76c66e90cffba140d1304e5f2e40d8381
705676b99f2fbc172,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738582869719707697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7eab15160c343857fad2cf8fa4d83,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8,PodSandboxId:df74fac204428ed18bc7f74678b8b71e9abaf84dd6307a0da0762d8eafe04fe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738582869730616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d882a7f4a506824d33e42b702ae7b3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b,PodSandboxId:580e35509b427b22b49df21fc93415c493df1f8867d4e36a04dd440029d6b5d2,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738582869661072255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29f0377fa5ee4e99a2715c1a9031403,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e,PodSandboxId:6b51b6c914fd0622e4bcb593ba6b30ed5892f57aceb9504ff045cc285ea84d3d,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738582824111323475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e69acd-910d-4bb4-a6a6-62604d394f5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1,PodSandboxId:4594db9a31b80aca3715b20e3d219391acdcebce68eb82fcc23de313f4060931,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738582822763272269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k59wk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8003ccf3-8f64-4745-b39a-6c4493831e88,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485,PodSandboxId:06879ea49ef2b2c7f6767d19f6b319196d280215a6e4b6445fb5d509c173c190,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738582809620525736,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-700839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106756c1a9a055e82a094e8ac81f43f3,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d67eb1ab-7579-4af6-8131-20dcd9405fd8 name=/runtime.v1.RuntimeService/ListContainers
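[Editor's note] The repeated /runtime.v1.RuntimeService/ListContainers entries above are routine CRI polling of CRI-O over its gRPC socket; the "No filters were applied, returning full container list" debug lines correspond to an empty ContainerFilter. For context only, here is a minimal Go sketch of the same call. It is illustrative and not part of the test run; the socket path /var/run/crio/crio.sock and the empty filter are assumptions matching the debug output above.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's CRI socket (default path assumed here).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty request mirrors the unfiltered ListContainers calls in the log,
	// which return both CONTAINER_RUNNING and CONTAINER_EXITED entries.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID matches the 13-character form used in "container status" below.
		fmt.Printf("%s %-25s attempt=%d state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}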
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7dec8234ece4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   2                   19cb97488df9f       coredns-668d6bf9bc-7zrrj
	fa8b076dccdd7       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   6 seconds ago        Running             kube-controller-manager   2                   580e35509b427       kube-controller-manager-kubernetes-upgrade-700839
	d594d6f504a38       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   6 seconds ago        Running             kube-apiserver            2                   df74fac204428       kube-apiserver-kubernetes-upgrade-700839
	c4af866feccf0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   7 seconds ago        Running             etcd                      2                   7ba6e2ae438d0       etcd-kubernetes-upgrade-700839
	0efd4f531524a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago       Running             coredns                   2                   ec27f386367fa       coredns-668d6bf9bc-jpw27
	cd674916a6274       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   30 seconds ago       Running             storage-provisioner       2                   da4c72f041cc5       storage-provisioner
	0b941eb5fe4fa       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   32 seconds ago       Running             kube-proxy                1                   b6d2f2d288413       kube-proxy-k59wk
	79d2665073ef9       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   34 seconds ago       Running             kube-scheduler            1                   a14b1f6f91d34       kube-scheduler-kubernetes-upgrade-700839
	3664cb0932b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   35 seconds ago       Exited              coredns                   1                   ec27f386367fa       coredns-668d6bf9bc-jpw27
	6a37e803007ef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago       Exited              coredns                   1                   19cb97488df9f       coredns-668d6bf9bc-7zrrj
	54106bb8937b2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   36 seconds ago       Exited              kube-apiserver            1                   df74fac204428       kube-apiserver-kubernetes-upgrade-700839
	a50f96f589f4d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   36 seconds ago       Exited              etcd                      1                   7ba6e2ae438d0       etcd-kubernetes-upgrade-700839
	1dbe0f72fed8e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   36 seconds ago       Exited              kube-controller-manager   1                   580e35509b427       kube-controller-manager-kubernetes-upgrade-700839
	6e77583e81299       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   6b51b6c914fd0       storage-provisioner
	fd27fd1fd140a       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   About a minute ago   Exited              kube-proxy                0                   4594db9a31b80       kube-proxy-k59wk
	8463f7d2ecab1       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   About a minute ago   Exited              kube-scheduler            0                   06879ea49ef2b       kube-scheduler-kubernetes-upgrade-700839
	
	
	==> coredns [0efd4f531524aeaa677d6ce20f1b5c79ac5a0493f0e73cd91b6cc1ea66cc161b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [3664cb0932b721d73093cee5a14b929e879e5c8c7b5d897b69ed7aaee0ee8c0a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7dec8234ece4f96a7f226bf5232bf06f1494c43c439e3c04cb292ae1a70d0f45] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
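[Editor's note] The coredns blocks above all fail the same way: the kubernetes plugin's reflectors cannot complete their initial list of Services, EndpointSlices, and Namespaces through the in-cluster Service VIP (10.96.0.1:443) while the kube-apiserver container restarts during the upgrade, so each attempt ends in "connect: connection refused" until the API returns. For context only, a minimal client-go sketch of the roughly equivalent call is shown below; the in-cluster config and limit=500 paging mirror, rather than reproduce, what the plugin does.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves to the kubernetes Service VIP, which is the
	// same 10.96.0.1:443 endpoint the coredns log lines are dialing.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Initial paged list (limit=500), as in the reflector URLs in the log.
	nsList, err := clientset.CoreV1().Namespaces().List(context.TODO(),
		metav1.ListOptions{Limit: 500})
	if err != nil {
		// With the apiserver down this surfaces as
		// "dial tcp 10.96.0.1:443: connect: connection refused".
		panic(err)
	}
	fmt.Println("namespaces:", len(nsList.Items))
}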
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-700839
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-700839
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:40:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-700839
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:41:42 +0000   Mon, 03 Feb 2025 11:40:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:41:42 +0000   Mon, 03 Feb 2025 11:40:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:41:42 +0000   Mon, 03 Feb 2025 11:40:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:41:42 +0000   Mon, 03 Feb 2025 11:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.247
	  Hostname:    kubernetes-upgrade-700839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e517e7679664c7cb8ebfcd16d9119a2
	  System UUID:                0e517e76-7966-4c7c-b8eb-fcd16d9119a2
	  Boot ID:                    341c13f8-4aea-4b58-a16a-d4aa5de0210d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-7zrrj                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 coredns-668d6bf9bc-jpw27                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-kubernetes-upgrade-700839                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         85s
	  kube-system                 kube-apiserver-kubernetes-upgrade-700839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-700839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-k59wk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-kubernetes-upgrade-700839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 83s                kube-proxy       
	  Normal   Starting                 32s                kube-proxy       
	  Normal   Starting                 98s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  97s (x8 over 98s)  kubelet          Node kubernetes-upgrade-700839 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s (x8 over 98s)  kubelet          Node kubernetes-upgrade-700839 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s (x7 over 98s)  kubelet          Node kubernetes-upgrade-700839 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           86s                node-controller  Node kubernetes-upgrade-700839 event: Registered Node kubernetes-upgrade-700839 in Controller
	  Warning  ContainerGCFailed        38s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           30s                node-controller  Node kubernetes-upgrade-700839 event: Registered Node kubernetes-upgrade-700839 in Controller
	  Normal   RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-700839 event: Registered Node kubernetes-upgrade-700839 in Controller
	
	
	==> dmesg <==
	[Feb 3 11:40] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.057056] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065636] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.229578] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.141243] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.285248] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +4.742958] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +0.074348] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.150553] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[ +12.714489] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.703135] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[ +13.962874] kauditd_printk_skb: 107 callbacks suppressed
	[Feb 3 11:41] systemd-fstab-generator[2309]: Ignoring "noauto" option for root device
	[  +0.173289] systemd-fstab-generator[2321]: Ignoring "noauto" option for root device
	[  +0.203895] systemd-fstab-generator[2335]: Ignoring "noauto" option for root device
	[  +0.176223] systemd-fstab-generator[2347]: Ignoring "noauto" option for root device
	[  +0.301050] systemd-fstab-generator[2375]: Ignoring "noauto" option for root device
	[  +7.289322] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.886698] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[  +4.678534] kauditd_printk_skb: 108 callbacks suppressed
	[  +6.636647] kauditd_printk_skb: 4 callbacks suppressed
	[ +16.882312] systemd-fstab-generator[3776]: Ignoring "noauto" option for root device
	[  +0.566291] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.244638] systemd-fstab-generator[4080]: Ignoring "noauto" option for root device
	[  +0.120146] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [a50f96f589f4dbb7f968833da6ac37717c959bb0c677dea772d29d0a5d04011f] <==
	{"level":"info","ts":"2025-02-03T11:41:11.279293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-03T11:41:11.279314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a received MsgPreVoteResp from 8796df0f8efd9e9a at term 2"}
	{"level":"info","ts":"2025-02-03T11:41:11.279327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a became candidate at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:11.279333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a received MsgVoteResp from 8796df0f8efd9e9a at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:11.279344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a became leader at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:11.279351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8796df0f8efd9e9a elected leader 8796df0f8efd9e9a at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:11.289862Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"8796df0f8efd9e9a","local-member-attributes":"{Name:kubernetes-upgrade-700839 ClientURLs:[https://192.168.50.247:2379]}","request-path":"/0/members/8796df0f8efd9e9a/attributes","cluster-id":"ac57774300be467d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-03T11:41:11.289927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T11:41:11.290338Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T11:41:11.295778Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-03T11:41:11.295866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-03T11:41:11.296490Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T11:41:11.297526Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.247:2379"}
	{"level":"info","ts":"2025-02-03T11:41:11.301888Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T11:41:11.302393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-03T11:41:36.969278Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-03T11:41:36.969429Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"kubernetes-upgrade-700839","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.247:2380"],"advertise-client-urls":["https://192.168.50.247:2379"]}
	{"level":"warn","ts":"2025-02-03T11:41:36.969578Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-03T11:41:36.969669Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-03T11:41:36.971462Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.247:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-03T11:41:36.971493Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.247:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-03T11:41:36.971576Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8796df0f8efd9e9a","current-leader-member-id":"8796df0f8efd9e9a"}
	{"level":"info","ts":"2025-02-03T11:41:36.975004Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.50.247:2380"}
	{"level":"info","ts":"2025-02-03T11:41:36.975108Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.50.247:2380"}
	{"level":"info","ts":"2025-02-03T11:41:36.975118Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"kubernetes-upgrade-700839","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.247:2380"],"advertise-client-urls":["https://192.168.50.247:2379"]}
	
	
	==> etcd [c4af866feccf00fa25f8ee548b0f4ed180105a3e3753a9b9db721397ece090ee] <==
	{"level":"info","ts":"2025-02-03T11:41:39.866314Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T11:41:39.875560Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-03T11:41:39.875675Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-03T11:41:39.876663Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T11:41:39.881764Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-03T11:41:39.882011Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.247:2380"}
	{"level":"info","ts":"2025-02-03T11:41:39.882142Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.247:2380"}
	{"level":"info","ts":"2025-02-03T11:41:39.883253Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"8796df0f8efd9e9a","initial-advertise-peer-urls":["https://192.168.50.247:2380"],"listen-peer-urls":["https://192.168.50.247:2380"],"advertise-client-urls":["https://192.168.50.247:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.247:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-03T11:41:39.883306Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-03T11:41:41.028019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:41.028086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:41.028110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a received MsgPreVoteResp from 8796df0f8efd9e9a at term 3"}
	{"level":"info","ts":"2025-02-03T11:41:41.028132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a became candidate at term 4"}
	{"level":"info","ts":"2025-02-03T11:41:41.028140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a received MsgVoteResp from 8796df0f8efd9e9a at term 4"}
	{"level":"info","ts":"2025-02-03T11:41:41.028148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8796df0f8efd9e9a became leader at term 4"}
	{"level":"info","ts":"2025-02-03T11:41:41.028154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8796df0f8efd9e9a elected leader 8796df0f8efd9e9a at term 4"}
	{"level":"info","ts":"2025-02-03T11:41:41.029831Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"8796df0f8efd9e9a","local-member-attributes":"{Name:kubernetes-upgrade-700839 ClientURLs:[https://192.168.50.247:2379]}","request-path":"/0/members/8796df0f8efd9e9a/attributes","cluster-id":"ac57774300be467d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-03T11:41:41.029881Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T11:41:41.030296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T11:41:41.030893Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T11:41:41.031426Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-03T11:41:41.031651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-03T11:41:41.031685Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-03T11:41:41.031964Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T11:41:41.037761Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.247:2379"}
	
	
	==> kernel <==
	 11:41:47 up 2 min,  0 users,  load average: 1.00, 0.44, 0.16
	Linux kubernetes-upgrade-700839 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [54106bb8937b2e34e37f0addcf341d653d6d27608ea1d43a8919f8aec641a6e8] <==
	I0203 11:41:26.409247       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0203 11:41:26.409264       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0203 11:41:26.409302       1 controller.go:132] Ending legacy_token_tracking_controller
	I0203 11:41:26.409312       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0203 11:41:26.409325       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0203 11:41:26.409353       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0203 11:41:26.409368       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0203 11:41:26.409799       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I0203 11:41:26.409828       1 cluster_authentication_trust_controller.go:485] Shutting down cluster_authentication_trust_controller controller
	I0203 11:41:26.410129       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 11:41:26.410793       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 11:41:26.410874       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0203 11:41:26.411142       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 11:41:26.411256       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 11:41:26.411295       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0203 11:41:26.411348       1 controller.go:157] Shutting down quota evaluator
	I0203 11:41:26.411376       1 controller.go:176] quota evaluator worker shutdown
	I0203 11:41:26.412104       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0203 11:41:26.412187       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 11:41:26.413740       1 controller.go:176] quota evaluator worker shutdown
	I0203 11:41:26.413850       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 11:41:26.413877       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 11:41:26.414051       1 controller.go:176] quota evaluator worker shutdown
	I0203 11:41:26.414074       1 controller.go:176] quota evaluator worker shutdown
	I0203 11:41:26.414126       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [d594d6f504a381492020cd1b109439ff8089cfbf922174736d17fcf41cef8fb1] <==
	I0203 11:41:42.498407       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 11:41:42.498663       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 11:41:42.498913       1 aggregator.go:171] initial CRD sync complete...
	I0203 11:41:42.498947       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 11:41:42.498964       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 11:41:42.498980       1 cache.go:39] Caches are synced for autoregister controller
	I0203 11:41:42.501656       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 11:41:42.513381       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 11:41:42.549020       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 11:41:42.589935       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 11:41:42.590060       1 policy_source.go:240] refreshing policies
	I0203 11:41:42.594893       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 11:41:42.601746       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 11:41:42.601785       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 11:41:42.612981       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 11:41:42.926029       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 11:41:43.405288       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0203 11:41:43.620311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.247]
	I0203 11:41:43.621774       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 11:41:43.631536       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 11:41:44.147271       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 11:41:44.184131       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 11:41:44.215488       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 11:41:44.222580       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 11:41:45.820344       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1dbe0f72fed8eba5e5916c12793caacaf4fb99bd3483c931b4c5af3417be047b] <==
	I0203 11:41:16.742947       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 11:41:16.743512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.360717ms"
	I0203 11:41:16.743830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.683µs"
	I0203 11:41:16.745000       1 shared_informer.go:320] Caches are synced for TTL
	I0203 11:41:16.748283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 11:41:16.750659       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 11:41:16.756009       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 11:41:16.756043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 11:41:16.756053       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 11:41:16.760565       1 shared_informer.go:320] Caches are synced for taint
	I0203 11:41:16.760682       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 11:41:16.760757       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-700839"
	I0203 11:41:16.760799       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 11:41:16.764338       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 11:41:16.764656       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 11:41:16.769663       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 11:41:16.773979       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 11:41:16.776432       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 11:41:16.776630       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 11:41:16.776707       1 shared_informer.go:320] Caches are synced for HPA
	I0203 11:41:16.778136       1 shared_informer.go:320] Caches are synced for GC
	I0203 11:41:16.778255       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 11:41:16.778555       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 11:41:21.564673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.796µs"
	I0203 11:41:22.565957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.483µs"
	
	
	==> kube-controller-manager [fa8b076dccdd703dc75b2f508ff62a59e6fd3d7235c1e4ef9b3c6b825ff7a69e] <==
	I0203 11:41:45.782510       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 11:41:45.782514       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 11:41:45.782519       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 11:41:45.782638       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-700839"
	I0203 11:41:45.786906       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 11:41:45.790453       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 11:41:45.790557       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-700839"
	I0203 11:41:45.793977       1 shared_informer.go:320] Caches are synced for GC
	I0203 11:41:45.798275       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 11:41:45.800644       1 shared_informer.go:320] Caches are synced for job
	I0203 11:41:45.810308       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 11:41:45.811260       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 11:41:45.811943       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 11:41:45.813132       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 11:41:45.814187       1 shared_informer.go:320] Caches are synced for namespace
	I0203 11:41:45.816784       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 11:41:45.816882       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 11:41:45.817229       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 11:41:45.817266       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 11:41:45.817652       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 11:41:45.824194       1 shared_informer.go:320] Caches are synced for service account
	I0203 11:41:45.825718       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 11:41:45.829822       1 shared_informer.go:320] Caches are synced for expand
	I0203 11:41:45.831535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.639416ms"
	I0203 11:41:45.831731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.593µs"
	
	
	==> kube-proxy [0b941eb5fe4faa7e91c6aa7759d9b24f4546e6a80ae5f9e5d260c1b6bb2c5702] <==
	E0203 11:41:14.672400       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 11:41:14.684380       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.247"]
	E0203 11:41:14.684740       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 11:41:14.761840       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 11:41:14.761900       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 11:41:14.761934       1 server_linux.go:170] "Using iptables Proxier"
	I0203 11:41:14.775316       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 11:41:14.776142       1 server.go:497] "Version info" version="v1.32.1"
	I0203 11:41:14.776185       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:41:14.784878       1 config.go:199] "Starting service config controller"
	I0203 11:41:14.785360       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 11:41:14.785502       1 config.go:105] "Starting endpoint slice config controller"
	I0203 11:41:14.785511       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 11:41:14.786416       1 config.go:329] "Starting node config controller"
	I0203 11:41:14.786447       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 11:41:14.886906       1 shared_informer.go:320] Caches are synced for node config
	I0203 11:41:14.886980       1 shared_informer.go:320] Caches are synced for service config
	I0203 11:41:14.886994       1 shared_informer.go:320] Caches are synced for endpoint slice config
	E0203 11:41:42.510188       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.511283       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.511340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: nodes \"kubernetes-upgrade-700839\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	
	
	==> kube-proxy [fd27fd1fd140a48bac5f258b94374adf6a8d9e521da53fac257df3cb7e8b1ca1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 11:40:23.223225       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 11:40:23.293356       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.247"]
	E0203 11:40:23.293822       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 11:40:23.492536       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 11:40:23.492569       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 11:40:23.492644       1 server_linux.go:170] "Using iptables Proxier"
	I0203 11:40:23.507303       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 11:40:23.507734       1 server.go:497] "Version info" version="v1.32.1"
	I0203 11:40:23.507755       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:40:23.515197       1 config.go:199] "Starting service config controller"
	I0203 11:40:23.518772       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 11:40:23.519200       1 config.go:329] "Starting node config controller"
	I0203 11:40:23.519265       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 11:40:23.522355       1 config.go:105] "Starting endpoint slice config controller"
	I0203 11:40:23.522460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 11:40:23.624081       1 shared_informer.go:320] Caches are synced for node config
	I0203 11:40:23.624312       1 shared_informer.go:320] Caches are synced for service config
	I0203 11:40:23.630145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [79d2665073ef9a1a0b4efd86c4f5d5ee2bb52908ddd3359187baaca5508ed38e] <==
	I0203 11:41:13.452756       1 serving.go:386] Generated self-signed cert in-memory
	I0203 11:41:14.308811       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 11:41:14.308866       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:41:14.319144       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 11:41:14.319434       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0203 11:41:14.319670       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0203 11:41:14.319868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 11:41:14.319899       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 11:41:14.319974       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0203 11:41:14.319999       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0203 11:41:14.319444       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 11:41:14.419924       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0203 11:41:14.420072       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 11:41:14.420140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0203 11:41:42.439204       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0203 11:41:42.465377       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465457       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465520       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465634       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465672       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465691       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0203 11:41:42.465796       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	
	
	==> kube-scheduler [8463f7d2ecab13df6028521883c72eb9b7480b48f52fe805b90df29b608c4485] <==
	W0203 11:40:13.896812       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0203 11:40:13.896872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.012959       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0203 11:40:14.013142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.041204       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0203 11:40:14.041522       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.049089       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0203 11:40:14.049159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.175390       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0203 11:40:14.175459       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0203 11:40:14.279438       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0203 11:40:14.279723       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.279871       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 11:40:14.279914       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.426783       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0203 11:40:14.426877       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.453487       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 11:40:14.453555       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0203 11:40:14.480402       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0203 11:40:14.480462       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 11:40:15.995770       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 11:40:54.672113       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 11:40:54.672246       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 11:40:54.672352       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0203 11:40:54.672427       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 03 11:41:41 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:41.037261    3783 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-700839\" not found" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.047386    3783 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-700839\" not found" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.047705    3783 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-700839\" not found" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.047577    3783 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-700839\" not found" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.047502    3783 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-700839\" not found" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.539947    3783 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.603682    3783 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.604033    3783 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.604312    3783 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.605949    3783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.661425    3783 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-700839\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.661480    3783 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.671230    3783 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-700839\" already exists" pod="kube-system/etcd-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.671283    3783 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.680429    3783 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-700839\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.680485    3783 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:42.690905    3783 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-700839\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-700839"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.811471    3783 apiserver.go:52] "Watching apiserver"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.846191    3783 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.916089    3783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/99e69acd-910d-4bb4-a6a6-62604d394f5f-tmp\") pod \"storage-provisioner\" (UID: \"99e69acd-910d-4bb4-a6a6-62604d394f5f\") " pod="kube-system/storage-provisioner"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.916501    3783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8003ccf3-8f64-4745-b39a-6c4493831e88-lib-modules\") pod \"kube-proxy-k59wk\" (UID: \"8003ccf3-8f64-4745-b39a-6c4493831e88\") " pod="kube-system/kube-proxy-k59wk"
	Feb 03 11:41:42 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:42.916684    3783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8003ccf3-8f64-4745-b39a-6c4493831e88-xtables-lock\") pod \"kube-proxy-k59wk\" (UID: \"8003ccf3-8f64-4745-b39a-6c4493831e88\") " pod="kube-system/kube-proxy-k59wk"
	Feb 03 11:41:43 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:43.049747    3783 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-700839"
	Feb 03 11:41:43 kubernetes-upgrade-700839 kubelet[3783]: E0203 11:41:43.064495    3783 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-700839\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-700839"
	Feb 03 11:41:43 kubernetes-upgrade-700839 kubelet[3783]: I0203 11:41:43.122846    3783 scope.go:117] "RemoveContainer" containerID="6a37e803007efad8d1d6e38be9cbb2bab41cbceb9f2490f22c86352792f9717b"
	
	
	==> storage-provisioner [6e77583e81299dc6729e8b7be42b86b55ac55f7f36533564939608af9ecda56e] <==
	I0203 11:40:24.216116       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 11:40:24.226174       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 11:40:24.226333       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0203 11:40:24.240172       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0203 11:40:24.240364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-700839_3ffad234-00fe-4234-8e08-3ca08b07f2f4!
	I0203 11:40:24.240440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a58e6887-cb08-4b72-b96f-d9d6560877a1", APIVersion:"v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-700839_3ffad234-00fe-4234-8e08-3ca08b07f2f4 became leader
	I0203 11:40:24.341246       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-700839_3ffad234-00fe-4234-8e08-3ca08b07f2f4!
	
	
	==> storage-provisioner [cd674916a6274b846001986cec46f618181c9bb4331cd9fdad4e2ab2ba0f02f6] <==
	I0203 11:41:16.156223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 11:41:16.167821       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 11:41:16.167915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0203 11:41:27.482528       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:30.534764       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:33.554412       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:37.203679       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:39.362637       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:41.739285       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:43.972499       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0203 11:41:46.695518       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-700839 -n kubernetes-upgrade-700839
I0203 11:41:47.902636  116606 config.go:182] Loaded profile config "flannel-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-700839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-700839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-700839
--- FAIL: TestKubernetesUpgrade (402.85s)
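Note: every storage-provisioner error above is a "connection refused" against the in-cluster apiserver service (10.96.0.1:443), so the repeated leader-election retries look like a symptom of the control plane being unreachable during the upgrade window rather than a provisioner bug. As a hedged triage sketch only (these commands are not part of the recorded run, and would only apply before the profile is deleted in the cleanup step above), the apiserver container and the Endpoints object the provisioner uses as its resource lock could be inspected like this; the profile name, endpoints name, and binary path are taken from the log:

	# hypothetical follow-up commands, assuming the kubernetes-upgrade-700839 profile still exists
	out/minikube-linux-amd64 -p kubernetes-upgrade-700839 ssh "sudo crictl ps -a --name kube-apiserver"
	# check apiserver readiness from the host via the profile's kubeconfig context
	kubectl --context kubernetes-upgrade-700839 get --raw /readyz
	# inspect the Endpoints resource used for the k8s.io-minikube-hostpath leader-election lock
	kubectl --context kubernetes-upgrade-700839 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml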

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (52.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-225830 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-225830 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.656797726s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-225830] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-225830" primary control-plane node in "pause-225830" cluster
	* Updating the running kvm2 "pause-225830" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-225830" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:37:06.100684  156311 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:37:06.100780  156311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:37:06.100787  156311 out.go:358] Setting ErrFile to fd 2...
	I0203 11:37:06.100791  156311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:37:06.101025  156311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:37:06.101592  156311 out.go:352] Setting JSON to false
	I0203 11:37:06.102726  156311 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8368,"bootTime":1738574258,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:37:06.102881  156311 start.go:139] virtualization: kvm guest
	I0203 11:37:06.105526  156311 out.go:177] * [pause-225830] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:37:06.106911  156311 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:37:06.106919  156311 notify.go:220] Checking for updates...
	I0203 11:37:06.109643  156311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:37:06.110875  156311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:37:06.111978  156311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:37:06.113099  156311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:37:06.114277  156311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:37:06.116096  156311 config.go:182] Loaded profile config "pause-225830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:37:06.116767  156311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:37:06.116880  156311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:37:06.135268  156311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I0203 11:37:06.135787  156311 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:37:06.136465  156311 main.go:141] libmachine: Using API Version  1
	I0203 11:37:06.136518  156311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:37:06.136891  156311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:37:06.137166  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:06.137507  156311 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:37:06.137965  156311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:37:06.138040  156311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:37:06.156246  156311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0203 11:37:06.156655  156311 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:37:06.157148  156311 main.go:141] libmachine: Using API Version  1
	I0203 11:37:06.157170  156311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:37:06.157477  156311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:37:06.157655  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:06.198351  156311 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 11:37:06.199606  156311 start.go:297] selected driver: kvm2
	I0203 11:37:06.199624  156311 start.go:901] validating driver "kvm2" against &{Name:pause-225830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-225830 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:06.199764  156311 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:37:06.200106  156311 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:37:06.200208  156311 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:37:06.217422  156311 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:37:06.218593  156311 cni.go:84] Creating CNI manager for ""
	I0203 11:37:06.218665  156311 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:37:06.218740  156311 start.go:340] cluster config:
	{Name:pause-225830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-225830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:f
alse storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:06.218948  156311 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:37:06.220812  156311 out.go:177] * Starting "pause-225830" primary control-plane node in "pause-225830" cluster
	I0203 11:37:06.221951  156311 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:37:06.222015  156311 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 11:37:06.222029  156311 cache.go:56] Caching tarball of preloaded images
	I0203 11:37:06.222136  156311 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:37:06.222153  156311 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:37:06.222328  156311 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/config.json ...
	I0203 11:37:06.222582  156311 start.go:360] acquireMachinesLock for pause-225830: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:37:06.222639  156311 start.go:364] duration metric: took 31.485µs to acquireMachinesLock for "pause-225830"
	I0203 11:37:06.222660  156311 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:37:06.222670  156311 fix.go:54] fixHost starting: 
	I0203 11:37:06.223091  156311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:37:06.223149  156311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:37:06.238033  156311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I0203 11:37:06.238587  156311 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:37:06.239221  156311 main.go:141] libmachine: Using API Version  1
	I0203 11:37:06.239248  156311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:37:06.239610  156311 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:37:06.239814  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:06.239965  156311 main.go:141] libmachine: (pause-225830) Calling .GetState
	I0203 11:37:06.241754  156311 fix.go:112] recreateIfNeeded on pause-225830: state=Running err=<nil>
	W0203 11:37:06.241791  156311 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:37:06.243678  156311 out.go:177] * Updating the running kvm2 "pause-225830" VM ...
	I0203 11:37:06.244913  156311 machine.go:93] provisionDockerMachine start ...
	I0203 11:37:06.244960  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:06.245211  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:06.247701  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.248506  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:06.248550  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.248570  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:06.248767  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:06.248988  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:06.251117  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:06.254300  156311 main.go:141] libmachine: Using SSH client type: native
	I0203 11:37:06.254565  156311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0203 11:37:06.254584  156311 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:37:06.383724  156311 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-225830
	
	I0203 11:37:06.383761  156311 main.go:141] libmachine: (pause-225830) Calling .GetMachineName
	I0203 11:37:06.384092  156311 buildroot.go:166] provisioning hostname "pause-225830"
	I0203 11:37:06.384116  156311 main.go:141] libmachine: (pause-225830) Calling .GetMachineName
	I0203 11:37:06.384331  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:06.387509  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.388041  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:06.388073  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.388208  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:06.388454  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:06.388639  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:06.388807  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:06.389006  156311 main.go:141] libmachine: Using SSH client type: native
	I0203 11:37:06.389251  156311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0203 11:37:06.389274  156311 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-225830 && echo "pause-225830" | sudo tee /etc/hostname
	I0203 11:37:06.528871  156311 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-225830
	
	I0203 11:37:06.528993  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:06.532503  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.532941  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:06.532974  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.533218  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:06.533442  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:06.533644  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:06.533812  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:06.533990  156311 main.go:141] libmachine: Using SSH client type: native
	I0203 11:37:06.534253  156311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0203 11:37:06.534297  156311 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-225830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-225830/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-225830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:37:06.655307  156311 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:37:06.655349  156311 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:37:06.655381  156311 buildroot.go:174] setting up certificates
	I0203 11:37:06.655393  156311 provision.go:84] configureAuth start
	I0203 11:37:06.655419  156311 main.go:141] libmachine: (pause-225830) Calling .GetMachineName
	I0203 11:37:06.655722  156311 main.go:141] libmachine: (pause-225830) Calling .GetIP
	I0203 11:37:06.659386  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.659877  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:06.659910  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.660200  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:06.663545  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.663992  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:06.664031  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:06.664199  156311 provision.go:143] copyHostCerts
	I0203 11:37:06.664272  156311 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:37:06.664298  156311 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:37:06.664374  156311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:37:06.664504  156311 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:37:06.664516  156311 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:37:06.664542  156311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:37:06.664597  156311 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:37:06.664605  156311 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:37:06.664621  156311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:37:06.664715  156311 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.pause-225830 san=[127.0.0.1 192.168.61.90 localhost minikube pause-225830]
	I0203 11:37:06.996857  156311 provision.go:177] copyRemoteCerts
	I0203 11:37:06.996940  156311 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:37:06.996981  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:07.000528  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:07.000941  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:07.000973  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:07.001129  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:07.001344  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:07.001531  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:07.001690  156311 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/pause-225830/id_rsa Username:docker}
	I0203 11:37:07.100718  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:37:07.143049  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0203 11:37:07.182520  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:37:07.223037  156311 provision.go:87] duration metric: took 567.61534ms to configureAuth
	I0203 11:37:07.223073  156311 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:37:07.223358  156311 config.go:182] Loaded profile config "pause-225830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:37:07.223495  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:07.227599  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:07.228238  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:07.228284  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:07.228526  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:07.228786  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:07.228988  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:07.229154  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:07.229366  156311 main.go:141] libmachine: Using SSH client type: native
	I0203 11:37:07.229607  156311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0203 11:37:07.229632  156311 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:37:14.603863  156311 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:37:14.603898  156311 machine.go:96] duration metric: took 8.358959403s to provisionDockerMachine
	I0203 11:37:14.603947  156311 start.go:293] postStartSetup for "pause-225830" (driver="kvm2")
	I0203 11:37:14.603961  156311 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:37:14.603989  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:14.604355  156311 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:37:14.604387  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:14.607457  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.607930  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:14.607960  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.608235  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:14.608477  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:14.608680  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:14.608845  156311 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/pause-225830/id_rsa Username:docker}
	I0203 11:37:14.697811  156311 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:37:14.702068  156311 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:37:14.702094  156311 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:37:14.702187  156311 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:37:14.702287  156311 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:37:14.702404  156311 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:37:14.712265  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:37:14.740459  156311 start.go:296] duration metric: took 136.493528ms for postStartSetup
	I0203 11:37:14.740505  156311 fix.go:56] duration metric: took 8.517836576s for fixHost
	I0203 11:37:14.740527  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:14.743641  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.744046  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:14.744076  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.744335  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:14.744546  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:14.744721  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:14.744853  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:14.745031  156311 main.go:141] libmachine: Using SSH client type: native
	I0203 11:37:14.745204  156311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0203 11:37:14.745217  156311 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:37:14.863384  156311 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738582634.854175539
	
	I0203 11:37:14.863410  156311 fix.go:216] guest clock: 1738582634.854175539
	I0203 11:37:14.863420  156311 fix.go:229] Guest: 2025-02-03 11:37:14.854175539 +0000 UTC Remote: 2025-02-03 11:37:14.740509864 +0000 UTC m=+8.683641726 (delta=113.665675ms)
	I0203 11:37:14.863446  156311 fix.go:200] guest clock delta is within tolerance: 113.665675ms
	I0203 11:37:14.863453  156311 start.go:83] releasing machines lock for "pause-225830", held for 8.64080196s
	I0203 11:37:14.863480  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:14.863762  156311 main.go:141] libmachine: (pause-225830) Calling .GetIP
	I0203 11:37:14.866886  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.867386  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:14.867414  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.867585  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:14.868308  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:14.868494  156311 main.go:141] libmachine: (pause-225830) Calling .DriverName
	I0203 11:37:14.868579  156311 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:37:14.868640  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:14.868712  156311 ssh_runner.go:195] Run: cat /version.json
	I0203 11:37:14.868740  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHHostname
	I0203 11:37:14.871568  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.871707  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.871985  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:14.872013  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.872058  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:14.872089  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:14.872251  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:14.872348  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHPort
	I0203 11:37:14.872457  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:14.872535  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHKeyPath
	I0203 11:37:14.872591  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:14.872690  156311 main.go:141] libmachine: (pause-225830) Calling .GetSSHUsername
	I0203 11:37:14.872756  156311 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/pause-225830/id_rsa Username:docker}
	I0203 11:37:14.872818  156311 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/pause-225830/id_rsa Username:docker}
	I0203 11:37:14.950847  156311 ssh_runner.go:195] Run: systemctl --version
	I0203 11:37:14.984098  156311 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:37:15.136019  156311 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:37:15.143518  156311 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:37:15.143605  156311 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:37:15.153098  156311 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0203 11:37:15.153127  156311 start.go:495] detecting cgroup driver to use...
	I0203 11:37:15.153189  156311 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:37:15.176423  156311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:37:15.193766  156311 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:37:15.193832  156311 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:37:15.207492  156311 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:37:15.221547  156311 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:37:15.361691  156311 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:37:15.494455  156311 docker.go:233] disabling docker service ...
	I0203 11:37:15.494545  156311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:37:15.510306  156311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:37:15.524722  156311 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:37:15.659488  156311 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:37:15.802621  156311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:37:15.818502  156311 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:37:15.869204  156311 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 11:37:15.869264  156311 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:15.930823  156311 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:37:15.930909  156311 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:16.001710  156311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:16.065904  156311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:16.117254  156311 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:37:16.161063  156311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:16.251445  156311 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:16.329303  156311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:37:16.377991  156311 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:37:16.442186  156311 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:37:16.500724  156311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:37:16.789001  156311 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:37:17.533119  156311 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:37:17.533208  156311 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:37:17.543354  156311 start.go:563] Will wait 60s for crictl version
	I0203 11:37:17.543427  156311 ssh_runner.go:195] Run: which crictl
	I0203 11:37:17.547949  156311 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:37:17.590382  156311 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:37:17.590488  156311 ssh_runner.go:195] Run: crio --version
	I0203 11:37:17.627218  156311 ssh_runner.go:195] Run: crio --version
	I0203 11:37:17.667410  156311 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0203 11:37:17.670623  156311 main.go:141] libmachine: (pause-225830) Calling .GetIP
	I0203 11:37:17.673684  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:17.674230  156311 main.go:141] libmachine: (pause-225830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:ca:fa", ip: ""} in network mk-pause-225830: {Iface:virbr1 ExpiryTime:2025-02-03 12:36:29 +0000 UTC Type:0 Mac:52:54:00:32:ca:fa Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:pause-225830 Clientid:01:52:54:00:32:ca:fa}
	I0203 11:37:17.674269  156311 main.go:141] libmachine: (pause-225830) DBG | domain pause-225830 has defined IP address 192.168.61.90 and MAC address 52:54:00:32:ca:fa in network mk-pause-225830
	I0203 11:37:17.674528  156311 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0203 11:37:17.678851  156311 kubeadm.go:883] updating cluster {Name:pause-225830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-225830 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:37:17.679022  156311 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:37:17.679088  156311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:37:17.729324  156311 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:37:17.729344  156311 crio.go:433] Images already preloaded, skipping extraction
	I0203 11:37:17.729392  156311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:37:17.771971  156311 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:37:17.771997  156311 cache_images.go:84] Images are preloaded, skipping loading
	I0203 11:37:17.772005  156311 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.32.1 crio true true} ...
	I0203 11:37:17.772100  156311 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-225830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-225830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:37:17.772170  156311 ssh_runner.go:195] Run: crio config
	I0203 11:37:17.822383  156311 cni.go:84] Creating CNI manager for ""
	I0203 11:37:17.822415  156311 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:37:17.822429  156311 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:37:17.822460  156311 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-225830 NodeName:pause-225830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:37:17.822596  156311 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-225830"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:37:17.822657  156311 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:37:17.833926  156311 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:37:17.834028  156311 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:37:17.844317  156311 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0203 11:37:17.865628  156311 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:37:17.883142  156311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0203 11:37:17.904169  156311 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0203 11:37:17.909319  156311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:37:18.113545  156311 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:37:18.152316  156311 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830 for IP: 192.168.61.90
	I0203 11:37:18.152343  156311 certs.go:194] generating shared ca certs ...
	I0203 11:37:18.152361  156311 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:37:18.152537  156311 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:37:18.152598  156311 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:37:18.152614  156311 certs.go:256] generating profile certs ...
	I0203 11:37:18.152725  156311 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/client.key
	I0203 11:37:18.152809  156311 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/apiserver.key.f7af692e
	I0203 11:37:18.152868  156311 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/proxy-client.key
	I0203 11:37:18.153035  156311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:37:18.153081  156311 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:37:18.153096  156311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:37:18.153130  156311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:37:18.153165  156311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:37:18.153199  156311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:37:18.153263  156311 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:37:18.154062  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:37:18.317256  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:37:18.457846  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:37:18.527144  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:37:18.555702  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0203 11:37:18.589235  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:37:18.634938  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:37:18.669689  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/pause-225830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:37:18.694586  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:37:18.725881  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:37:18.763961  156311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:37:18.793116  156311 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:37:18.823124  156311 ssh_runner.go:195] Run: openssl version
	I0203 11:37:18.828934  156311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:37:18.841798  156311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:37:18.846440  156311 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:37:18.846510  156311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:37:18.852878  156311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:37:18.865107  156311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:37:18.877864  156311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:37:18.882449  156311 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:37:18.882521  156311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:37:18.888154  156311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:37:18.898328  156311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:37:18.909305  156311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:18.914341  156311 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:18.914417  156311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:18.920118  156311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:37:18.929871  156311 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:37:18.935383  156311 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:37:18.941250  156311 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:37:18.948811  156311 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:37:18.955908  156311 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:37:18.961861  156311 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:37:18.969622  156311 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 11:37:18.975407  156311 kubeadm.go:392] StartCluster: {Name:pause-225830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-225830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:18.975550  156311 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:37:18.975607  156311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:37:19.018478  156311 cri.go:89] found id: "b7c047a5dac045413b1ad2811a4ba417593062f0cdb65db990601268348d4314"
	I0203 11:37:19.018510  156311 cri.go:89] found id: "daf902d6dba36bf14210661352cd3021dd5fcd16b96a063ed598051e0a617649"
	I0203 11:37:19.018516  156311 cri.go:89] found id: "007966f5a46ef166a3437e1514f5fa72a61e3fcaccf6a52da9b4b5733ee56413"
	I0203 11:37:19.018520  156311 cri.go:89] found id: "855345ddf0f166578b934bd4cddff48a7ebd8557601ed52964f1912fa2baf710"
	I0203 11:37:19.018523  156311 cri.go:89] found id: "8e523037626abc0198797563e226397796c34590545edb23c9a9ca10f5491dcf"
	I0203 11:37:19.018526  156311 cri.go:89] found id: "6a09b5a725bf8b1a1c98c9815ae060b9e99837d0e115dd0fed9b3663faf5e5fb"
	I0203 11:37:19.018529  156311 cri.go:89] found id: "d9c48a93b0af201f627f9bf3dde4d511c32fdf361166caec1bfcfd601b97efa3"
	I0203 11:37:19.018531  156311 cri.go:89] found id: ""
	I0203 11:37:19.018581  156311 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-225830 -n pause-225830
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-225830 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-225830 logs -n 25: (1.503584991s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo cat              | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo cat              | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo find             | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo crio             | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-927018                       | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p kubernetes-upgrade-700839           | kubernetes-upgrade-700839 | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-178849                 | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p NoKubernetes-178849                 | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-191474              | running-upgrade-191474    | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p stopped-upgrade-574710              | minikube                  | jenkins | v1.26.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:36 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-178849 sudo            | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-178849                 | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p pause-225830 --memory=2048          | pause-225830              | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:37 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-574710 stop            | minikube                  | jenkins | v1.26.0 | 03 Feb 25 11:36 UTC | 03 Feb 25 11:36 UTC |
	| start   | -p stopped-upgrade-574710              | stopped-upgrade-574710    | jenkins | v1.35.0 | 03 Feb 25 11:36 UTC | 03 Feb 25 11:37 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-225830                        | pause-225830              | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC | 03 Feb 25 11:37 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-149645              | cert-expiration-149645    | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-574710              | stopped-upgrade-574710    | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC | 03 Feb 25 11:37 UTC |
	| start   | -p auto-927018 --memory=3072           | auto-927018               | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:37:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:37:32.016525  156665 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:37:32.016662  156665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:37:32.016678  156665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:37:32.016686  156665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:37:32.016875  156665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:37:32.017483  156665 out.go:352] Setting JSON to false
	I0203 11:37:32.018544  156665 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8394,"bootTime":1738574258,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:37:32.018663  156665 start.go:139] virtualization: kvm guest
	I0203 11:37:32.021103  156665 out.go:177] * [auto-927018] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:37:32.022444  156665 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:37:32.022472  156665 notify.go:220] Checking for updates...
	I0203 11:37:32.024853  156665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:37:32.025957  156665 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:37:32.026984  156665 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:37:32.028233  156665 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:37:32.029296  156665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:37:32.030929  156665 config.go:182] Loaded profile config "cert-expiration-149645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:37:32.031079  156665 config.go:182] Loaded profile config "kubernetes-upgrade-700839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0203 11:37:32.031273  156665 config.go:182] Loaded profile config "pause-225830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:37:32.031440  156665 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:37:32.083340  156665 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 11:37:32.084615  156665 start.go:297] selected driver: kvm2
	I0203 11:37:32.084638  156665 start.go:901] validating driver "kvm2" against <nil>
	I0203 11:37:32.084655  156665 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:37:32.085738  156665 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:37:32.085837  156665 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:37:32.108173  156665 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:37:32.108242  156665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:37:32.108620  156665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:37:32.108666  156665 cni.go:84] Creating CNI manager for ""
	I0203 11:37:32.108725  156665 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:37:32.108739  156665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 11:37:32.108803  156665 start.go:340] cluster config:
	{Name:auto-927018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:32.108952  156665 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:37:32.110856  156665 out.go:177] * Starting "auto-927018" primary control-plane node in "auto-927018" cluster
	I0203 11:37:32.112096  156665 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:37:32.112147  156665 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 11:37:32.112163  156665 cache.go:56] Caching tarball of preloaded images
	I0203 11:37:32.112311  156665 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:37:32.112330  156665 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:37:32.112464  156665 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/config.json ...
	I0203 11:37:32.112490  156665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/config.json: {Name:mkb2d79093001f3266dcdbc74d86c42a8fb9c494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:37:32.112679  156665 start.go:360] acquireMachinesLock for auto-927018: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:37:32.112721  156665 start.go:364] duration metric: took 22.161µs to acquireMachinesLock for "auto-927018"
	I0203 11:37:32.112745  156665 start.go:93] Provisioning new machine with config: &{Name:auto-927018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:37:32.112833  156665 start.go:125] createHost starting for "" (driver="kvm2")
	I0203 11:37:29.522467  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:37:29.578597  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:37:29.659397  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:37:29.698446  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:37:29.730202  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:37:29.756086  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:37:29.783165  156445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:37:29.824092  156445 ssh_runner.go:195] Run: openssl version
	I0203 11:37:29.840702  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:37:29.856538  156445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:37:29.872604  156445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:37:29.872656  156445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:37:29.884410  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:37:29.896408  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:37:29.914487  156445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:29.919507  156445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:29.919560  156445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:29.929900  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:37:29.941045  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:37:29.953806  156445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:37:29.958230  156445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:37:29.958276  156445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:37:29.966201  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:37:29.977208  156445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:37:29.981850  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:37:29.992693  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:37:30.004570  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:37:30.013248  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:37:30.027172  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:37:30.035307  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 11:37:30.043108  156445 kubeadm.go:392] StartCluster: {Name:cert-expiration-149645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-149645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.82 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:30.043210  156445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:37:30.043285  156445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:37:30.128887  156445 cri.go:89] found id: "a9735631ad72e7972555896907caf58e0018c6d752780a8b264c29261477af9e"
	I0203 11:37:30.128900  156445 cri.go:89] found id: "d5f462ad69e1300e0343d2f2c65924ca97dad5c2b8daeb38ab40e68905786943"
	I0203 11:37:30.128904  156445 cri.go:89] found id: "fed5e7fcd83ca8b4f9ea1178618edb617e96aa729321c8b584e64e78f7c35a38"
	I0203 11:37:30.128907  156445 cri.go:89] found id: "62f074c4e3f3e08c44caa54623e1967cb650a14fd1694c19905b1e57663f1e46"
	I0203 11:37:30.128910  156445 cri.go:89] found id: "e1a0f129953d04730e5a42ca90580c44e3569c4545f31397bd70ab38e2f4e07d"
	I0203 11:37:30.128914  156445 cri.go:89] found id: "01fbb8c8262e22fa510f2f6c8887a6c86211a85fc355181cc4c5f21e28ec0338"
	I0203 11:37:30.128917  156445 cri.go:89] found id: "3176a9ff1bf2c7f569964d3c376f019a90e18e8e976fdf5bcb0c8c3714f1c11b"
	I0203 11:37:30.128920  156445 cri.go:89] found id: "c49cc51eedab598447745f8befed3837205ea6f7ce30652c3f841bb0c7e8bfad"
	I0203 11:37:30.128922  156445 cri.go:89] found id: "672f19824cb2eb7975e1823376f39ac3960ef89f294b33c47aab138c293ff7e4"
	I0203 11:37:30.128932  156445 cri.go:89] found id: "666f4875d3c41fa7ed8e5327be45ff3d4e2ec905fd6fe4af68b0c50c8ac0941c"
	I0203 11:37:30.128935  156445 cri.go:89] found id: "0200d9ab1e2fae1500faadbf1ebaff5a5eae704c76ed6d3f01cf56c70923905e"
	I0203 11:37:30.128938  156445 cri.go:89] found id: "e14d9cbd65606c925893b72d7ddad6d8b451c1bf6188b429aa6a6d89fb9b49eb"
	I0203 11:37:30.128941  156445 cri.go:89] found id: "b2d6301593d869a0234df22696d4092938a04d5b5bdc2e9b334ad14bfd3f3fa3"
	I0203 11:37:30.128946  156445 cri.go:89] found id: ""
	I0203 11:37:30.128994  156445 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-225830 -n pause-225830
helpers_test.go:261: (dbg) Run:  kubectl --context pause-225830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-225830 -n pause-225830
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-225830 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-225830 logs -n 25: (1.693038649s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo cat              | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo cat              | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo                  | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo find             | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-927018 sudo crio             | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-927018                       | cilium-927018             | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p kubernetes-upgrade-700839           | kubernetes-upgrade-700839 | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-178849                 | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p NoKubernetes-178849                 | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-191474              | running-upgrade-191474    | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p stopped-upgrade-574710              | minikube                  | jenkins | v1.26.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:36 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-178849 sudo            | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-178849                 | NoKubernetes-178849       | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:35 UTC |
	| start   | -p pause-225830 --memory=2048          | pause-225830              | jenkins | v1.35.0 | 03 Feb 25 11:35 UTC | 03 Feb 25 11:37 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-574710 stop            | minikube                  | jenkins | v1.26.0 | 03 Feb 25 11:36 UTC | 03 Feb 25 11:36 UTC |
	| start   | -p stopped-upgrade-574710              | stopped-upgrade-574710    | jenkins | v1.35.0 | 03 Feb 25 11:36 UTC | 03 Feb 25 11:37 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-225830                        | pause-225830              | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC | 03 Feb 25 11:37 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-149645              | cert-expiration-149645    | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-574710              | stopped-upgrade-574710    | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC | 03 Feb 25 11:37 UTC |
	| start   | -p auto-927018 --memory=3072           | auto-927018               | jenkins | v1.35.0 | 03 Feb 25 11:37 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:37:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:37:32.016525  156665 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:37:32.016662  156665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:37:32.016678  156665 out.go:358] Setting ErrFile to fd 2...
	I0203 11:37:32.016686  156665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:37:32.016875  156665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:37:32.017483  156665 out.go:352] Setting JSON to false
	I0203 11:37:32.018544  156665 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8394,"bootTime":1738574258,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:37:32.018663  156665 start.go:139] virtualization: kvm guest
	I0203 11:37:32.021103  156665 out.go:177] * [auto-927018] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:37:32.022444  156665 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:37:32.022472  156665 notify.go:220] Checking for updates...
	I0203 11:37:32.024853  156665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:37:32.025957  156665 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:37:32.026984  156665 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:37:32.028233  156665 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:37:32.029296  156665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:37:32.030929  156665 config.go:182] Loaded profile config "cert-expiration-149645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:37:32.031079  156665 config.go:182] Loaded profile config "kubernetes-upgrade-700839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0203 11:37:32.031273  156665 config.go:182] Loaded profile config "pause-225830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:37:32.031440  156665 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:37:32.083340  156665 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 11:37:32.084615  156665 start.go:297] selected driver: kvm2
	I0203 11:37:32.084638  156665 start.go:901] validating driver "kvm2" against <nil>
	I0203 11:37:32.084655  156665 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:37:32.085738  156665 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:37:32.085837  156665 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:37:32.108173  156665 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:37:32.108242  156665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:37:32.108620  156665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:37:32.108666  156665 cni.go:84] Creating CNI manager for ""
	I0203 11:37:32.108725  156665 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:37:32.108739  156665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 11:37:32.108803  156665 start.go:340] cluster config:
	{Name:auto-927018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:32.108952  156665 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:37:32.110856  156665 out.go:177] * Starting "auto-927018" primary control-plane node in "auto-927018" cluster
	I0203 11:37:32.112096  156665 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:37:32.112147  156665 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 11:37:32.112163  156665 cache.go:56] Caching tarball of preloaded images
	I0203 11:37:32.112311  156665 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:37:32.112330  156665 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:37:32.112464  156665 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/config.json ...
	I0203 11:37:32.112490  156665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/config.json: {Name:mkb2d79093001f3266dcdbc74d86c42a8fb9c494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:37:32.112679  156665 start.go:360] acquireMachinesLock for auto-927018: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:37:32.112721  156665 start.go:364] duration metric: took 22.161µs to acquireMachinesLock for "auto-927018"
	I0203 11:37:32.112745  156665 start.go:93] Provisioning new machine with config: &{Name:auto-927018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-927018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:37:32.112833  156665 start.go:125] createHost starting for "" (driver="kvm2")
	I0203 11:37:29.522467  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:37:29.578597  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:37:29.659397  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:37:29.698446  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:37:29.730202  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:37:29.756086  156445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:37:29.783165  156445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:37:29.824092  156445 ssh_runner.go:195] Run: openssl version
	I0203 11:37:29.840702  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:37:29.856538  156445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:37:29.872604  156445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:37:29.872656  156445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:37:29.884410  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:37:29.896408  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:37:29.914487  156445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:29.919507  156445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:29.919560  156445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:37:29.929900  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:37:29.941045  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:37:29.953806  156445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:37:29.958230  156445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:37:29.958276  156445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:37:29.966201  156445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:37:29.977208  156445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:37:29.981850  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:37:29.992693  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:37:30.004570  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:37:30.013248  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:37:30.027172  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:37:30.035307  156445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 11:37:30.043108  156445 kubeadm.go:392] StartCluster: {Name:cert-expiration-149645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-149645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.82 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:37:30.043210  156445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:37:30.043285  156445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:37:30.128887  156445 cri.go:89] found id: "a9735631ad72e7972555896907caf58e0018c6d752780a8b264c29261477af9e"
	I0203 11:37:30.128900  156445 cri.go:89] found id: "d5f462ad69e1300e0343d2f2c65924ca97dad5c2b8daeb38ab40e68905786943"
	I0203 11:37:30.128904  156445 cri.go:89] found id: "fed5e7fcd83ca8b4f9ea1178618edb617e96aa729321c8b584e64e78f7c35a38"
	I0203 11:37:30.128907  156445 cri.go:89] found id: "62f074c4e3f3e08c44caa54623e1967cb650a14fd1694c19905b1e57663f1e46"
	I0203 11:37:30.128910  156445 cri.go:89] found id: "e1a0f129953d04730e5a42ca90580c44e3569c4545f31397bd70ab38e2f4e07d"
	I0203 11:37:30.128914  156445 cri.go:89] found id: "01fbb8c8262e22fa510f2f6c8887a6c86211a85fc355181cc4c5f21e28ec0338"
	I0203 11:37:30.128917  156445 cri.go:89] found id: "3176a9ff1bf2c7f569964d3c376f019a90e18e8e976fdf5bcb0c8c3714f1c11b"
	I0203 11:37:30.128920  156445 cri.go:89] found id: "c49cc51eedab598447745f8befed3837205ea6f7ce30652c3f841bb0c7e8bfad"
	I0203 11:37:30.128922  156445 cri.go:89] found id: "672f19824cb2eb7975e1823376f39ac3960ef89f294b33c47aab138c293ff7e4"
	I0203 11:37:30.128932  156445 cri.go:89] found id: "666f4875d3c41fa7ed8e5327be45ff3d4e2ec905fd6fe4af68b0c50c8ac0941c"
	I0203 11:37:30.128935  156445 cri.go:89] found id: "0200d9ab1e2fae1500faadbf1ebaff5a5eae704c76ed6d3f01cf56c70923905e"
	I0203 11:37:30.128938  156445 cri.go:89] found id: "e14d9cbd65606c925893b72d7ddad6d8b451c1bf6188b429aa6a6d89fb9b49eb"
	I0203 11:37:30.128941  156445 cri.go:89] found id: "b2d6301593d869a0234df22696d4092938a04d5b5bdc2e9b334ad14bfd3f3fa3"
	I0203 11:37:30.128946  156445 cri.go:89] found id: ""
	I0203 11:37:30.128994  156445 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-225830 -n pause-225830
helpers_test.go:261: (dbg) Run:  kubectl --context pause-225830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (52.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (272.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.240659169s)

                                                
                                                
-- stdout --
	* [old-k8s-version-517711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-517711" primary control-plane node in "old-k8s-version-517711" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:41:26.865901  166532 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:41:26.866066  166532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:41:26.866091  166532 out.go:358] Setting ErrFile to fd 2...
	I0203 11:41:26.866095  166532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:41:26.866252  166532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:41:26.866823  166532 out.go:352] Setting JSON to false
	I0203 11:41:26.868077  166532 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8629,"bootTime":1738574258,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:41:26.868191  166532 start.go:139] virtualization: kvm guest
	I0203 11:41:26.870359  166532 out.go:177] * [old-k8s-version-517711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:41:26.871528  166532 notify.go:220] Checking for updates...
	I0203 11:41:26.871551  166532 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:41:26.872770  166532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:41:26.874106  166532 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:41:26.875191  166532 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:41:26.876242  166532 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:41:26.877308  166532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:41:26.878824  166532 config.go:182] Loaded profile config "bridge-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:26.878926  166532 config.go:182] Loaded profile config "flannel-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:26.879006  166532 config.go:182] Loaded profile config "kubernetes-upgrade-700839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:41:26.879091  166532 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:41:26.915059  166532 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 11:41:26.916220  166532 start.go:297] selected driver: kvm2
	I0203 11:41:26.916232  166532 start.go:901] validating driver "kvm2" against <nil>
	I0203 11:41:26.916259  166532 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:41:26.916946  166532 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:41:26.917045  166532 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:41:26.932544  166532 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:41:26.932607  166532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:41:26.932891  166532 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:41:26.932926  166532 cni.go:84] Creating CNI manager for ""
	I0203 11:41:26.932986  166532 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:41:26.932998  166532 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 11:41:26.933057  166532 start.go:340] cluster config:
	{Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:41:26.933172  166532 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:41:26.934976  166532 out.go:177] * Starting "old-k8s-version-517711" primary control-plane node in "old-k8s-version-517711" cluster
	I0203 11:41:26.936291  166532 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:41:26.936338  166532 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0203 11:41:26.936357  166532 cache.go:56] Caching tarball of preloaded images
	I0203 11:41:26.936783  166532 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:41:26.936815  166532 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0203 11:41:26.936945  166532 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json ...
	I0203 11:41:26.936977  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json: {Name:mk4dcbba19913098aa7d6976ed46cfbb452fb29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:26.937263  166532 start.go:360] acquireMachinesLock for old-k8s-version-517711: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:41:28.590854  166532 start.go:364] duration metric: took 1.65352527s to acquireMachinesLock for "old-k8s-version-517711"
	I0203 11:41:28.590941  166532 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:41:28.591040  166532 start.go:125] createHost starting for "" (driver="kvm2")
	I0203 11:41:28.593182  166532 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:41:28.593400  166532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:41:28.593458  166532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:41:28.613338  166532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0203 11:41:28.613844  166532 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:41:28.614480  166532 main.go:141] libmachine: Using API Version  1
	I0203 11:41:28.614503  166532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:41:28.614820  166532 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:41:28.615028  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:41:28.615166  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:28.615317  166532 start.go:159] libmachine.API.Create for "old-k8s-version-517711" (driver="kvm2")
	I0203 11:41:28.615340  166532 client.go:168] LocalClient.Create starting
	I0203 11:41:28.615379  166532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem
	I0203 11:41:28.615422  166532 main.go:141] libmachine: Decoding PEM data...
	I0203 11:41:28.615442  166532 main.go:141] libmachine: Parsing certificate...
	I0203 11:41:28.615525  166532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem
	I0203 11:41:28.615559  166532 main.go:141] libmachine: Decoding PEM data...
	I0203 11:41:28.615577  166532 main.go:141] libmachine: Parsing certificate...
	I0203 11:41:28.615603  166532 main.go:141] libmachine: Running pre-create checks...
	I0203 11:41:28.615616  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .PreCreateCheck
	I0203 11:41:28.615929  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetConfigRaw
	I0203 11:41:28.616293  166532 main.go:141] libmachine: Creating machine...
	I0203 11:41:28.616302  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .Create
	I0203 11:41:28.616496  166532 main.go:141] libmachine: (old-k8s-version-517711) creating KVM machine...
	I0203 11:41:28.616520  166532 main.go:141] libmachine: (old-k8s-version-517711) creating network...
	I0203 11:41:28.617909  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found existing default KVM network
	I0203 11:41:28.619432  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.619281  166585 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:e3:f1} reservation:<nil>}
	I0203 11:41:28.620358  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.620277  166585 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:e5:75} reservation:<nil>}
	I0203 11:41:28.621571  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.621495  166585 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030cc90}
	I0203 11:41:28.621619  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | created network xml: 
	I0203 11:41:28.621639  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | <network>
	I0203 11:41:28.621664  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   <name>mk-old-k8s-version-517711</name>
	I0203 11:41:28.621674  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   <dns enable='no'/>
	I0203 11:41:28.621682  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   
	I0203 11:41:28.621705  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0203 11:41:28.621737  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |     <dhcp>
	I0203 11:41:28.621763  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0203 11:41:28.621816  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |     </dhcp>
	I0203 11:41:28.621849  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   </ip>
	I0203 11:41:28.621864  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG |   
	I0203 11:41:28.621878  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | </network>
	I0203 11:41:28.621888  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | 
	I0203 11:41:28.627066  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | trying to create private KVM network mk-old-k8s-version-517711 192.168.61.0/24...
	I0203 11:41:28.710113  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | private KVM network mk-old-k8s-version-517711 192.168.61.0/24 created
	I0203 11:41:28.710154  166532 main.go:141] libmachine: (old-k8s-version-517711) setting up store path in /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711 ...
	I0203 11:41:28.710167  166532 main.go:141] libmachine: (old-k8s-version-517711) building disk image from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0203 11:41:28.710185  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:28.710136  166585 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:41:28.710351  166532 main.go:141] libmachine: (old-k8s-version-517711) Downloading /home/jenkins/minikube-integration/20354-109432/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:41:29.015223  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:29.014524  166585 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa...
	I0203 11:41:29.183178  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:29.182990  166585 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/old-k8s-version-517711.rawdisk...
	I0203 11:41:29.183218  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Writing magic tar header
	I0203 11:41:29.183238  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Writing SSH key tar header
	I0203 11:41:29.183251  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:29.183219  166585 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711 ...
	I0203 11:41:29.183416  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711
	I0203 11:41:29.183481  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube/machines
	I0203 11:41:29.183501  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:41:29.183516  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711 (perms=drwx------)
	I0203 11:41:29.183608  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20354-109432
	I0203 11:41:29.183676  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube/machines (perms=drwxr-xr-x)
	I0203 11:41:29.183707  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0203 11:41:29.183732  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home/jenkins
	I0203 11:41:29.183758  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | checking permissions on dir: /home
	I0203 11:41:29.183776  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432/.minikube (perms=drwxr-xr-x)
	I0203 11:41:29.183798  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration/20354-109432 (perms=drwxrwxr-x)
	I0203 11:41:29.183807  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | skipping /home - not owner
	I0203 11:41:29.183836  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0203 11:41:29.183870  166532 main.go:141] libmachine: (old-k8s-version-517711) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0203 11:41:29.183897  166532 main.go:141] libmachine: (old-k8s-version-517711) creating domain...
	I0203 11:41:29.186489  166532 main.go:141] libmachine: (old-k8s-version-517711) define libvirt domain using xml: 
	I0203 11:41:29.186513  166532 main.go:141] libmachine: (old-k8s-version-517711) <domain type='kvm'>
	I0203 11:41:29.186524  166532 main.go:141] libmachine: (old-k8s-version-517711)   <name>old-k8s-version-517711</name>
	I0203 11:41:29.186531  166532 main.go:141] libmachine: (old-k8s-version-517711)   <memory unit='MiB'>2200</memory>
	I0203 11:41:29.186538  166532 main.go:141] libmachine: (old-k8s-version-517711)   <vcpu>2</vcpu>
	I0203 11:41:29.186545  166532 main.go:141] libmachine: (old-k8s-version-517711)   <features>
	I0203 11:41:29.186556  166532 main.go:141] libmachine: (old-k8s-version-517711)     <acpi/>
	I0203 11:41:29.186566  166532 main.go:141] libmachine: (old-k8s-version-517711)     <apic/>
	I0203 11:41:29.186574  166532 main.go:141] libmachine: (old-k8s-version-517711)     <pae/>
	I0203 11:41:29.186584  166532 main.go:141] libmachine: (old-k8s-version-517711)     
	I0203 11:41:29.186591  166532 main.go:141] libmachine: (old-k8s-version-517711)   </features>
	I0203 11:41:29.186601  166532 main.go:141] libmachine: (old-k8s-version-517711)   <cpu mode='host-passthrough'>
	I0203 11:41:29.186608  166532 main.go:141] libmachine: (old-k8s-version-517711)   
	I0203 11:41:29.186621  166532 main.go:141] libmachine: (old-k8s-version-517711)   </cpu>
	I0203 11:41:29.186629  166532 main.go:141] libmachine: (old-k8s-version-517711)   <os>
	I0203 11:41:29.186635  166532 main.go:141] libmachine: (old-k8s-version-517711)     <type>hvm</type>
	I0203 11:41:29.186643  166532 main.go:141] libmachine: (old-k8s-version-517711)     <boot dev='cdrom'/>
	I0203 11:41:29.186649  166532 main.go:141] libmachine: (old-k8s-version-517711)     <boot dev='hd'/>
	I0203 11:41:29.186657  166532 main.go:141] libmachine: (old-k8s-version-517711)     <bootmenu enable='no'/>
	I0203 11:41:29.186663  166532 main.go:141] libmachine: (old-k8s-version-517711)   </os>
	I0203 11:41:29.186670  166532 main.go:141] libmachine: (old-k8s-version-517711)   <devices>
	I0203 11:41:29.186677  166532 main.go:141] libmachine: (old-k8s-version-517711)     <disk type='file' device='cdrom'>
	I0203 11:41:29.186690  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/boot2docker.iso'/>
	I0203 11:41:29.186703  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target dev='hdc' bus='scsi'/>
	I0203 11:41:29.186710  166532 main.go:141] libmachine: (old-k8s-version-517711)       <readonly/>
	I0203 11:41:29.186723  166532 main.go:141] libmachine: (old-k8s-version-517711)     </disk>
	I0203 11:41:29.186734  166532 main.go:141] libmachine: (old-k8s-version-517711)     <disk type='file' device='disk'>
	I0203 11:41:29.186743  166532 main.go:141] libmachine: (old-k8s-version-517711)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0203 11:41:29.186762  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source file='/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/old-k8s-version-517711.rawdisk'/>
	I0203 11:41:29.186776  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target dev='hda' bus='virtio'/>
	I0203 11:41:29.186784  166532 main.go:141] libmachine: (old-k8s-version-517711)     </disk>
	I0203 11:41:29.186795  166532 main.go:141] libmachine: (old-k8s-version-517711)     <interface type='network'>
	I0203 11:41:29.186803  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source network='mk-old-k8s-version-517711'/>
	I0203 11:41:29.186810  166532 main.go:141] libmachine: (old-k8s-version-517711)       <model type='virtio'/>
	I0203 11:41:29.186818  166532 main.go:141] libmachine: (old-k8s-version-517711)     </interface>
	I0203 11:41:29.186829  166532 main.go:141] libmachine: (old-k8s-version-517711)     <interface type='network'>
	I0203 11:41:29.186837  166532 main.go:141] libmachine: (old-k8s-version-517711)       <source network='default'/>
	I0203 11:41:29.186845  166532 main.go:141] libmachine: (old-k8s-version-517711)       <model type='virtio'/>
	I0203 11:41:29.186852  166532 main.go:141] libmachine: (old-k8s-version-517711)     </interface>
	I0203 11:41:29.186862  166532 main.go:141] libmachine: (old-k8s-version-517711)     <serial type='pty'>
	I0203 11:41:29.186870  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target port='0'/>
	I0203 11:41:29.186879  166532 main.go:141] libmachine: (old-k8s-version-517711)     </serial>
	I0203 11:41:29.186887  166532 main.go:141] libmachine: (old-k8s-version-517711)     <console type='pty'>
	I0203 11:41:29.186894  166532 main.go:141] libmachine: (old-k8s-version-517711)       <target type='serial' port='0'/>
	I0203 11:41:29.186906  166532 main.go:141] libmachine: (old-k8s-version-517711)     </console>
	I0203 11:41:29.186912  166532 main.go:141] libmachine: (old-k8s-version-517711)     <rng model='virtio'>
	I0203 11:41:29.186924  166532 main.go:141] libmachine: (old-k8s-version-517711)       <backend model='random'>/dev/random</backend>
	I0203 11:41:29.186937  166532 main.go:141] libmachine: (old-k8s-version-517711)     </rng>
	I0203 11:41:29.186944  166532 main.go:141] libmachine: (old-k8s-version-517711)     
	I0203 11:41:29.186954  166532 main.go:141] libmachine: (old-k8s-version-517711)     
	I0203 11:41:29.186961  166532 main.go:141] libmachine: (old-k8s-version-517711)   </devices>
	I0203 11:41:29.186967  166532 main.go:141] libmachine: (old-k8s-version-517711) </domain>
	I0203 11:41:29.186978  166532 main.go:141] libmachine: (old-k8s-version-517711) 
	I0203 11:41:29.192539  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:6c:98:20 in network default
	I0203 11:41:29.193289  166532 main.go:141] libmachine: (old-k8s-version-517711) starting domain...
	I0203 11:41:29.193323  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:29.193332  166532 main.go:141] libmachine: (old-k8s-version-517711) ensuring networks are active...
	I0203 11:41:29.194158  166532 main.go:141] libmachine: (old-k8s-version-517711) Ensuring network default is active
	I0203 11:41:29.194566  166532 main.go:141] libmachine: (old-k8s-version-517711) Ensuring network mk-old-k8s-version-517711 is active
	I0203 11:41:29.195252  166532 main.go:141] libmachine: (old-k8s-version-517711) getting domain XML...
	I0203 11:41:29.196192  166532 main.go:141] libmachine: (old-k8s-version-517711) creating domain...
	I0203 11:41:30.595230  166532 main.go:141] libmachine: (old-k8s-version-517711) waiting for IP...
	I0203 11:41:30.596211  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:30.596769  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:30.596848  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:30.596797  166585 retry.go:31] will retry after 188.638955ms: waiting for domain to come up
	I0203 11:41:30.787921  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:30.788690  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:30.788725  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:30.788648  166585 retry.go:31] will retry after 299.90555ms: waiting for domain to come up
	I0203 11:41:31.090527  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:31.091045  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:31.091072  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:31.090995  166585 retry.go:31] will retry after 395.922052ms: waiting for domain to come up
	I0203 11:41:31.488852  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:31.489818  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:31.489876  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:31.489800  166585 retry.go:31] will retry after 578.898423ms: waiting for domain to come up
	I0203 11:41:32.070941  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:32.071608  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:32.071631  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:32.071574  166585 retry.go:31] will retry after 706.5192ms: waiting for domain to come up
	I0203 11:41:32.780456  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:32.781091  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:32.781123  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:32.781057  166585 retry.go:31] will retry after 804.047535ms: waiting for domain to come up
	I0203 11:41:33.587298  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:33.587844  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:33.587875  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:33.587807  166585 retry.go:31] will retry after 912.319933ms: waiting for domain to come up
	I0203 11:41:34.501523  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:34.502119  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:34.502147  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:34.502104  166585 retry.go:31] will retry after 1.13391392s: waiting for domain to come up
	I0203 11:41:35.637314  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:35.637847  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:35.637912  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:35.637837  166585 retry.go:31] will retry after 1.13199998s: waiting for domain to come up
	I0203 11:41:36.771306  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:36.771759  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:36.771814  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:36.771739  166585 retry.go:31] will retry after 1.632808893s: waiting for domain to come up
	I0203 11:41:38.405840  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:38.406394  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:38.406424  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:38.406354  166585 retry.go:31] will retry after 2.223756189s: waiting for domain to come up
	I0203 11:41:40.632375  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:40.633012  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:40.633048  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:40.632945  166585 retry.go:31] will retry after 2.448781389s: waiting for domain to come up
	I0203 11:41:43.083253  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:43.083826  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:43.083849  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:43.083793  166585 retry.go:31] will retry after 3.15170325s: waiting for domain to come up
	I0203 11:41:46.239090  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:46.239554  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:41:46.239600  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:41:46.239527  166585 retry.go:31] will retry after 4.456059482s: waiting for domain to come up
	I0203 11:41:50.697566  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:50.698116  166532 main.go:141] libmachine: (old-k8s-version-517711) found domain IP: 192.168.61.203
	I0203 11:41:50.698153  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has current primary IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:50.698163  166532 main.go:141] libmachine: (old-k8s-version-517711) reserving static IP address...
	I0203 11:41:50.698536  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-517711", mac: "52:54:00:e5:0b:11", ip: "192.168.61.203"} in network mk-old-k8s-version-517711
	I0203 11:41:50.787671  166532 main.go:141] libmachine: (old-k8s-version-517711) reserved static IP address 192.168.61.203 for domain old-k8s-version-517711
	I0203 11:41:50.787702  166532 main.go:141] libmachine: (old-k8s-version-517711) waiting for SSH...
	I0203 11:41:50.787713  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Getting to WaitForSSH function...
	I0203 11:41:50.791019  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:50.791593  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:50.791620  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:50.792115  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Using SSH client type: external
	I0203 11:41:50.792145  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa (-rw-------)
	I0203 11:41:50.792184  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:41:50.792203  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | About to run SSH command:
	I0203 11:41:50.792216  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | exit 0
	I0203 11:41:50.935020  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | SSH cmd err, output: <nil>: 
	I0203 11:41:50.935352  166532 main.go:141] libmachine: (old-k8s-version-517711) KVM machine creation complete
	I0203 11:41:50.935663  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetConfigRaw
	I0203 11:41:50.936278  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:50.936508  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:50.936670  166532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0203 11:41:50.936687  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetState
	I0203 11:41:50.938468  166532 main.go:141] libmachine: Detecting operating system of created instance...
	I0203 11:41:50.938486  166532 main.go:141] libmachine: Waiting for SSH to be available...
	I0203 11:41:50.938494  166532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0203 11:41:50.938502  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:50.941576  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:50.942081  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:50.942114  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:50.942291  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:50.942474  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:50.942654  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:50.942850  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:50.943044  166532 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:50.943394  166532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:41:50.943415  166532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0203 11:41:51.057468  166532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:41:51.057500  166532 main.go:141] libmachine: Detecting the provisioner...
	I0203 11:41:51.057511  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:51.061168  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.061614  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.061645  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.061816  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:51.062035  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.062225  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.062426  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:51.062635  166532 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:51.062864  166532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:41:51.062878  166532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0203 11:41:51.175950  166532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0203 11:41:51.176027  166532 main.go:141] libmachine: found compatible host: buildroot
	I0203 11:41:51.176039  166532 main.go:141] libmachine: Provisioning with buildroot...
	I0203 11:41:51.176050  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:41:51.176327  166532 buildroot.go:166] provisioning hostname "old-k8s-version-517711"
	I0203 11:41:51.176353  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:41:51.176552  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:51.180114  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.180725  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.180763  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.181131  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:51.181378  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.181559  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.181729  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:51.181892  166532 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:51.182125  166532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:41:51.182145  166532 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-517711 && echo "old-k8s-version-517711" | sudo tee /etc/hostname
	I0203 11:41:51.325226  166532 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-517711
	
	I0203 11:41:51.325271  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:51.333412  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.333973  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.334017  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.334398  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:51.334576  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.334715  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.334841  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:51.335087  166532 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:51.335340  166532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:41:51.335362  166532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-517711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-517711/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-517711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:41:51.492919  166532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
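
The hostname step above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one if none is present. A quick manual check on the guest, assuming the profile from this run is still up (a sketch, not part of the test):

	minikube -p old-k8s-version-517711 ssh "hostname && grep old-k8s-version-517711 /etc/hosts"
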
	I0203 11:41:51.492951  166532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:41:51.492973  166532 buildroot.go:174] setting up certificates
	I0203 11:41:51.492986  166532 provision.go:84] configureAuth start
	I0203 11:41:51.492997  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:41:51.493394  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:41:51.497618  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.498261  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.498286  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.498525  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:51.502113  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.502658  166532 provision.go:143] copyHostCerts
	I0203 11:41:51.502720  166532 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:41:51.502738  166532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:41:51.502817  166532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:41:51.502955  166532 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:41:51.502963  166532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:41:51.502991  166532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:41:51.503058  166532 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:41:51.503064  166532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:41:51.503087  166532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:41:51.503146  166532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-517711 san=[127.0.0.1 192.168.61.203 localhost minikube old-k8s-version-517711]
	I0203 11:41:51.506104  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.506130  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.611916  166532 provision.go:177] copyRemoteCerts
	I0203 11:41:51.612046  166532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:41:51.612104  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:51.615736  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.616487  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.616547  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.617136  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:51.617333  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.617434  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:51.617586  166532 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:41:51.720953  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:41:51.758701  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0203 11:41:51.788747  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:41:51.821617  166532 provision.go:87] duration metric: took 328.614256ms to configureAuth
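
copyRemoteCerts places the CA and the freshly generated server key pair under /etc/docker on the guest, matching the auth options set earlier. A manual spot-check, as a sketch:

	minikube -p old-k8s-version-517711 ssh "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
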
	I0203 11:41:51.821653  166532 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:41:51.821903  166532 config.go:182] Loaded profile config "old-k8s-version-517711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0203 11:41:51.822038  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:51.825460  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.825990  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:51.826055  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:51.826310  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:51.826519  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.826675  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:51.826836  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:51.827067  166532 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:51.827295  166532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:41:51.827320  166532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:41:52.213361  166532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
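
The step above drops /etc/sysconfig/crio.minikube and restarts CRI-O so the --insecure-registry setting for the service CIDR takes effect. To see the file and how the unit references it (assuming the Buildroot image wires it in via an environment file, which this log does not show):

	minikube -p old-k8s-version-517711 ssh "cat /etc/sysconfig/crio.minikube && systemctl cat crio | grep -i environment"
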
	I0203 11:41:52.213393  166532 main.go:141] libmachine: Checking connection to Docker...
	I0203 11:41:52.213405  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetURL
	I0203 11:41:52.214656  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | using libvirt version 6000000
	I0203 11:41:52.217196  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.217603  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.217628  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.217817  166532 main.go:141] libmachine: Docker is up and running!
	I0203 11:41:52.217832  166532 main.go:141] libmachine: Reticulating splines...
	I0203 11:41:52.217841  166532 client.go:171] duration metric: took 23.602489665s to LocalClient.Create
	I0203 11:41:52.217869  166532 start.go:167] duration metric: took 23.602554071s to libmachine.API.Create "old-k8s-version-517711"
	I0203 11:41:52.217883  166532 start.go:293] postStartSetup for "old-k8s-version-517711" (driver="kvm2")
	I0203 11:41:52.217899  166532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:41:52.217924  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:52.218172  166532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:41:52.218205  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:52.220785  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.221147  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.221179  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.221331  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:52.221547  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:52.221722  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:52.221890  166532 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:41:52.304589  166532 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:41:52.309039  166532 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:41:52.309073  166532 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:41:52.309158  166532 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:41:52.309286  166532 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:41:52.309417  166532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:41:52.322326  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:41:52.352314  166532 start.go:296] duration metric: took 134.411542ms for postStartSetup
	I0203 11:41:52.352373  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetConfigRaw
	I0203 11:41:52.353063  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:41:52.356044  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.356419  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.356452  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.356710  166532 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json ...
	I0203 11:41:52.356961  166532 start.go:128] duration metric: took 23.765906676s to createHost
	I0203 11:41:52.356997  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:52.359267  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.359588  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.359616  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.359783  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:52.359947  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:52.360140  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:52.360309  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:52.360483  166532 main.go:141] libmachine: Using SSH client type: native
	I0203 11:41:52.360702  166532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:41:52.360714  166532 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:41:52.466526  166532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738582912.434953751
	
	I0203 11:41:52.466555  166532 fix.go:216] guest clock: 1738582912.434953751
	I0203 11:41:52.466562  166532 fix.go:229] Guest: 2025-02-03 11:41:52.434953751 +0000 UTC Remote: 2025-02-03 11:41:52.356980595 +0000 UTC m=+25.529203955 (delta=77.973156ms)
	I0203 11:41:52.466619  166532 fix.go:200] guest clock delta is within tolerance: 77.973156ms
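
The clock check compares the guest's date +%s.%N output with the host-side timestamp taken when the command returned; the ~78ms delta here is well within tolerance. The same comparison by hand, as a rough sketch (it ignores the SSH round-trip, so expect a slightly larger number):

	guest=$(minikube -p old-k8s-version-517711 ssh "date +%s.%N")
	host=$(date +%s.%N)
	echo "guest/host clock delta: $(echo "$host - $guest" | bc) s"
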
	I0203 11:41:52.466627  166532 start.go:83] releasing machines lock for "old-k8s-version-517711", held for 23.875727284s
	I0203 11:41:52.466660  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:52.466943  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:41:52.469890  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.470275  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.470307  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.470511  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:52.470984  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:52.471191  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:41:52.471272  166532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:41:52.471320  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:52.471424  166532 ssh_runner.go:195] Run: cat /version.json
	I0203 11:41:52.471453  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:41:52.474250  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.474540  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.474633  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.474665  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.474822  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:52.474941  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:52.474987  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:52.474993  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:52.475113  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:41:52.475158  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:52.475311  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:41:52.475308  166532 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:41:52.475468  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:41:52.475614  166532 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:41:52.582776  166532 ssh_runner.go:195] Run: systemctl --version
	I0203 11:41:52.589179  166532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:41:52.751534  166532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:41:52.757419  166532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:41:52.757502  166532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:41:52.773261  166532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
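
The find invocation above is logged with its shell escaping stripped; with quoting restored it reads roughly as follows, renaming any bridge/podman CNI configs out of the way:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
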
	I0203 11:41:52.773286  166532 start.go:495] detecting cgroup driver to use...
	I0203 11:41:52.773346  166532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:41:52.794698  166532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:41:52.809443  166532 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:41:52.809503  166532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:41:52.825339  166532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:41:52.839131  166532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:41:52.962949  166532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:41:53.122020  166532 docker.go:233] disabling docker service ...
	I0203 11:41:53.122100  166532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:41:53.138614  166532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:41:53.152548  166532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:41:53.308012  166532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:41:53.430822  166532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:41:53.445542  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:41:53.463493  166532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0203 11:41:53.463573  166532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:53.473516  166532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:41:53.473578  166532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:53.483021  166532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:53.493158  166532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:41:53.503758  166532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
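
After the crictl.yaml write and the sed edits to 02-crio.conf above, the effective runtime settings can be confirmed on the guest; a sketch:

	minikube -p old-k8s-version-517711 ssh \
	  "cat /etc/crictl.yaml; grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
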
	I0203 11:41:53.513631  166532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:41:53.522154  166532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:41:53.522212  166532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:41:53.534949  166532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
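
The three commands above are a simple fallback: if the bridge-netfilter sysctl key is missing, load br_netfilter, then enable IPv4 forwarding. The same logic as a standalone sketch:

	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
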
	I0203 11:41:53.545018  166532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:41:53.663201  166532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:41:53.759899  166532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:41:53.759984  166532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:41:53.764700  166532 start.go:563] Will wait 60s for crictl version
	I0203 11:41:53.764769  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:53.768852  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:41:53.814326  166532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:41:53.814415  166532 ssh_runner.go:195] Run: crio --version
	I0203 11:41:53.842121  166532 ssh_runner.go:195] Run: crio --version
	I0203 11:41:53.877425  166532 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0203 11:41:53.878483  166532 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:41:53.882158  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:53.882663  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:41:45 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:41:53.882702  166532 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:41:53.883024  166532 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0203 11:41:53.887263  166532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:41:53.901959  166532 kubeadm.go:883] updating cluster {Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:41:53.902093  166532 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:41:53.902145  166532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:41:53.938784  166532 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0203 11:41:53.938854  166532 ssh_runner.go:195] Run: which lz4
	I0203 11:41:53.942921  166532 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:41:53.947002  166532 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:41:53.947035  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0203 11:41:55.486187  166532 crio.go:462] duration metric: took 1.543289845s to copy over tarball
	I0203 11:41:55.486279  166532 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:41:58.167026  166532 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.680719666s)
	I0203 11:41:58.167055  166532 crio.go:469] duration metric: took 2.680831152s to extract the tarball
	I0203 11:41:58.167063  166532 ssh_runner.go:146] rm: /preloaded.tar.lz4
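
The preload step copies the ~450 MiB tarball to the guest and unpacks it into /var, seeding CRI-O's image and overlay store in one shot. A manual equivalent using the key path from this run (a sketch; the copy goes to /tmp here so it does not need root):

	KEY=/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa
	TARBALL=/home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	scp -i "$KEY" "$TARBALL" docker@192.168.61.203:/tmp/preloaded.tar.lz4
	ssh -i "$KEY" docker@192.168.61.203 \
	  "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm -f /tmp/preloaded.tar.lz4"
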
	I0203 11:41:58.212320  166532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:41:58.258405  166532 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0203 11:41:58.258446  166532 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0203 11:41:58.258574  166532 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.258625  166532 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.258627  166532 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:58.258586  166532 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:41:58.258683  166532 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0203 11:41:58.258718  166532 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.258602  166532 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:58.258762  166532 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:58.260587  166532 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:58.260598  166532 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:41:58.260617  166532 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:58.260619  166532 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.260587  166532 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.260707  166532 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0203 11:41:58.260704  166532 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:58.260999  166532 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.421589  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.446311  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0203 11:41:58.456462  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.456640  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.464451  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:58.471399  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:58.475976  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:58.479998  166532 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0203 11:41:58.480048  166532 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.480088  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.534991  166532 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0203 11:41:58.535056  166532 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0203 11:41:58.535115  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.589082  166532 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0203 11:41:58.589135  166532 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.589197  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.589268  166532 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0203 11:41:58.589343  166532 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.589383  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.622330  166532 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0203 11:41:58.622381  166532 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:58.622420  166532 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0203 11:41:58.622446  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.622445  166532 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:58.622473  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.622498  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.622525  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:41:58.622541  166532 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0203 11:41:58.622563  166532 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:58.622587  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.622605  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.622591  166532 ssh_runner.go:195] Run: which crictl
	I0203 11:41:58.639649  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:58.731223  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:41:58.731288  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:58.731376  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.731286  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.731430  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:58.731489  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.754801  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:58.888894  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:41:58.888929  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:41:58.888990  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:58.889033  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:58.889108  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:41:58.889188  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:41:58.902792  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:41:59.037661  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:41:59.037678  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0203 11:41:59.037721  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0203 11:41:59.043761  166532 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:41:59.043822  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0203 11:41:59.043872  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0203 11:41:59.043891  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0203 11:41:59.097064  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0203 11:41:59.101206  166532 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0203 11:41:59.492709  166532 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:41:59.662626  166532 cache_images.go:92] duration metric: took 1.404155022s to LoadCachedImages
	W0203 11:41:59.662734  166532 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
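
The warning is non-fatal: the per-image cache under .minikube/cache/images is empty for these tags, so the missing images are pulled from the registry during bootstrap instead. One way to pre-seed that host-side cache before a run, as a hedged sketch ("minikube cache add" is one option; the image list is taken from the LoadCachedImages line above):

	for img in kube-apiserver:v1.20.0 kube-controller-manager:v1.20.0 kube-scheduler:v1.20.0 \
	           kube-proxy:v1.20.0 pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
	  minikube cache add "registry.k8s.io/${img}"
	done
	minikube cache add gcr.io/k8s-minikube/storage-provisioner:v5
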
	I0203 11:41:59.662754  166532 kubeadm.go:934] updating node { 192.168.61.203 8443 v1.20.0 crio true true} ...
	I0203 11:41:59.662884  166532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-517711 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
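
The kubelet flags above end up in the systemd drop-in that is scp'd a few lines below (10-kubeadm.conf, 430 bytes). Once the node is up, the easiest way to confirm what systemd actually loaded is:

	minikube -p old-k8s-version-517711 ssh "systemctl cat kubelet"
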
	I0203 11:41:59.662979  166532 ssh_runner.go:195] Run: crio config
	I0203 11:41:59.712751  166532 cni.go:84] Creating CNI manager for ""
	I0203 11:41:59.712778  166532 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:41:59.712793  166532 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:41:59.712821  166532 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.203 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-517711 NodeName:old-k8s-version-517711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0203 11:41:59.713015  166532 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-517711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:41:59.713093  166532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0203 11:41:59.723617  166532 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:41:59.723709  166532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:41:59.733930  166532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0203 11:41:59.752923  166532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:41:59.770781  166532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
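
kubeadm.yaml.new is the rendered form of the config printed above, and the kubeadm binary it targets is the versioned one found under /var/lib/minikube/binaries. A minimal sketch of the equivalent manual call with the paths from this log (the real bootstrap adds preflight and output flags not shown in this excerpt):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
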
	I0203 11:41:59.793895  166532 ssh_runner.go:195] Run: grep 192.168.61.203	control-plane.minikube.internal$ /etc/hosts
	I0203 11:41:59.798059  166532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:41:59.811265  166532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:41:59.974492  166532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:41:59.998428  166532 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711 for IP: 192.168.61.203
	I0203 11:41:59.998461  166532 certs.go:194] generating shared ca certs ...
	I0203 11:41:59.998485  166532 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:41:59.998655  166532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:41:59.998702  166532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:41:59.998716  166532 certs.go:256] generating profile certs ...
	I0203 11:41:59.998789  166532 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.key
	I0203 11:41:59.998811  166532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.crt with IP's: []
	I0203 11:42:00.447099  166532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.crt ...
	I0203 11:42:00.447138  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.crt: {Name:mkb8972667b8b5346ebf6b697ad51681d7b93e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:42:00.447340  166532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.key ...
	I0203 11:42:00.447359  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.key: {Name:mkb4be1a0e91ff983dc683821c253276e45f44d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:42:00.447468  166532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key.067e8325
	I0203 11:42:00.447490  166532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt.067e8325 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.203]
	I0203 11:42:00.510458  166532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt.067e8325 ...
	I0203 11:42:00.510496  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt.067e8325: {Name:mk94fadf90d0772eedcc3935aef8367adbfc4ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:42:00.510685  166532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key.067e8325 ...
	I0203 11:42:00.510703  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key.067e8325: {Name:mk390f247a4ba4d21e1fdc8453b9f9d52d8bfe50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:42:00.510800  166532 certs.go:381] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt.067e8325 -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt
	I0203 11:42:00.510907  166532 certs.go:385] copying /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key.067e8325 -> /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key
	I0203 11:42:00.510994  166532 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.key
	I0203 11:42:00.511016  166532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.crt with IP's: []
	I0203 11:42:00.602036  166532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.crt ...
	I0203 11:42:00.602069  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.crt: {Name:mk35a3cbbb7f99c000adb2422a6fbcae74bc048a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:42:00.602232  166532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.key ...
	I0203 11:42:00.602245  166532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.key: {Name:mkfeffeb123adb8ecdbfd97892e72edd94714b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:42:00.602413  166532 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:42:00.602451  166532 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:42:00.602462  166532 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:42:00.602485  166532 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:42:00.602506  166532 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:42:00.602527  166532 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:42:00.602561  166532 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:42:00.603184  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:42:00.629396  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:42:00.654522  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:42:00.678064  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:42:00.703455  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0203 11:42:00.728762  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:42:00.818933  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:42:00.845001  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:42:00.873400  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:42:00.921785  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:42:00.963397  166532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:42:01.000564  166532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:42:01.019439  166532 ssh_runner.go:195] Run: openssl version
	I0203 11:42:01.025331  166532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:42:01.036697  166532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:42:01.041636  166532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:42:01.041707  166532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:42:01.047834  166532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:42:01.059585  166532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:42:01.071368  166532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:42:01.076267  166532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:42:01.076331  166532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:42:01.082319  166532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:42:01.093970  166532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:42:01.104884  166532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:42:01.109552  166532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:42:01.109620  166532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:42:01.115301  166532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:42:01.129627  166532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:42:01.134637  166532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:42:01.134710  166532 kubeadm.go:392] StartCluster: {Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:42:01.134819  166532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:42:01.134875  166532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:42:01.177905  166532 cri.go:89] found id: ""
	I0203 11:42:01.178024  166532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:42:01.189625  166532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:42:01.200103  166532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:42:01.210054  166532 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:42:01.210079  166532 kubeadm.go:157] found existing configuration files:
	
	I0203 11:42:01.210140  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:42:01.219210  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:42:01.219283  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:42:01.228548  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:42:01.237663  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:42:01.237744  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:42:01.248155  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:42:01.257395  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:42:01.257476  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:42:01.267554  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:42:01.277100  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:42:01.277186  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:42:01.288422  166532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:42:01.548520  166532 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:44:00.064397  166532 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:44:00.064508  166532 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:44:00.066106  166532 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:44:00.066183  166532 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:44:00.066285  166532 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:44:00.066371  166532 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:44:00.066451  166532 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:44:00.066509  166532 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:44:00.068502  166532 out.go:235]   - Generating certificates and keys ...
	I0203 11:44:00.068599  166532 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:44:00.068666  166532 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:44:00.068729  166532 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 11:44:00.068775  166532 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 11:44:00.068823  166532 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 11:44:00.068865  166532 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 11:44:00.068908  166532 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 11:44:00.069015  166532 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-517711] and IPs [192.168.61.203 127.0.0.1 ::1]
	I0203 11:44:00.069073  166532 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 11:44:00.069234  166532 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-517711] and IPs [192.168.61.203 127.0.0.1 ::1]
	I0203 11:44:00.069340  166532 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 11:44:00.069428  166532 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 11:44:00.069487  166532 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 11:44:00.069560  166532 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:44:00.069604  166532 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:44:00.069649  166532 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:44:00.069703  166532 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:44:00.069771  166532 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:44:00.069883  166532 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:44:00.069969  166532 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:44:00.070059  166532 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:44:00.070222  166532 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:44:00.072632  166532 out.go:235]   - Booting up control plane ...
	I0203 11:44:00.072719  166532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:44:00.072802  166532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:44:00.072869  166532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:44:00.072938  166532 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:44:00.073074  166532 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:44:00.073137  166532 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:44:00.073221  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:00.073384  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:00.073447  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:00.073658  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:00.073719  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:00.073908  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:00.073973  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:00.074154  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:00.074216  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:00.074408  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:00.074426  166532 kubeadm.go:310] 
	I0203 11:44:00.074513  166532 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:44:00.074582  166532 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:44:00.074592  166532 kubeadm.go:310] 
	I0203 11:44:00.074647  166532 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:44:00.074681  166532 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:44:00.074783  166532 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:44:00.074793  166532 kubeadm.go:310] 
	I0203 11:44:00.074888  166532 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:44:00.074942  166532 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:44:00.074993  166532 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:44:00.075003  166532 kubeadm.go:310] 
	I0203 11:44:00.075143  166532 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:44:00.075222  166532 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:44:00.075229  166532 kubeadm.go:310] 
	I0203 11:44:00.075351  166532 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:44:00.075462  166532 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:44:00.075534  166532 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:44:00.075597  166532 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:44:00.075666  166532 kubeadm.go:310] 
	W0203 11:44:00.075752  166532 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-517711] and IPs [192.168.61.203 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-517711] and IPs [192.168.61.203 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-517711] and IPs [192.168.61.203 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-517711] and IPs [192.168.61.203 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0203 11:44:00.075793  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:44:01.385479  166532 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.309659311s)
	I0203 11:44:01.385602  166532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:44:01.399884  166532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:44:01.409959  166532 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:44:01.409980  166532 kubeadm.go:157] found existing configuration files:
	
	I0203 11:44:01.410046  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:44:01.420154  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:44:01.420228  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:44:01.430498  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:44:01.440042  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:44:01.440107  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:44:01.450058  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:44:01.458822  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:44:01.458880  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:44:01.469384  166532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:44:01.481090  166532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:44:01.481164  166532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:44:01.493966  166532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:44:01.584433  166532 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:44:01.584507  166532 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:44:01.734344  166532 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:44:01.734512  166532 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:44:01.734643  166532 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:44:01.940530  166532 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:44:01.942467  166532 out.go:235]   - Generating certificates and keys ...
	I0203 11:44:01.942572  166532 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:44:01.942677  166532 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:44:01.942807  166532 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:44:01.942889  166532 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:44:01.942978  166532 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:44:01.943048  166532 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:44:01.943122  166532 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:44:01.943180  166532 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:44:01.943242  166532 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:44:01.943306  166532 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:44:01.943340  166532 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:44:01.943393  166532 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:44:02.097952  166532 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:44:02.366303  166532 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:44:02.672013  166532 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:44:03.179833  166532 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:44:03.197506  166532 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:44:03.198727  166532 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:44:03.198796  166532 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:44:03.333876  166532 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:44:03.336200  166532 out.go:235]   - Booting up control plane ...
	I0203 11:44:03.336349  166532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:44:03.341330  166532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:44:03.344199  166532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:44:03.344329  166532 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:44:03.349348  166532 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:44:43.352000  166532 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:44:43.352109  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:43.352322  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:48.352775  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:48.352986  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:44:58.353582  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:44:58.353869  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:45:18.353244  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:45:18.353551  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:45:58.353113  166532 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:45:58.353373  166532 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:45:58.353503  166532 kubeadm.go:310] 
	I0203 11:45:58.353579  166532 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:45:58.353876  166532 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:45:58.353897  166532 kubeadm.go:310] 
	I0203 11:45:58.353935  166532 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:45:58.353977  166532 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:45:58.354137  166532 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:45:58.354163  166532 kubeadm.go:310] 
	I0203 11:45:58.354276  166532 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:45:58.354319  166532 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:45:58.354357  166532 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:45:58.354364  166532 kubeadm.go:310] 
	I0203 11:45:58.354477  166532 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:45:58.354563  166532 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:45:58.354597  166532 kubeadm.go:310] 
	I0203 11:45:58.354712  166532 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:45:58.354801  166532 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:45:58.354879  166532 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:45:58.354985  166532 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:45:58.354994  166532 kubeadm.go:310] 
	I0203 11:45:58.356694  166532 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:45:58.356800  166532 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:45:58.356868  166532 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:45:58.357016  166532 kubeadm.go:394] duration metric: took 3m57.222311714s to StartCluster
	I0203 11:45:58.357062  166532 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:45:58.357129  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:45:58.409958  166532 cri.go:89] found id: ""
	I0203 11:45:58.410021  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.410036  166532 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:45:58.410045  166532 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:45:58.410127  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:45:58.455461  166532 cri.go:89] found id: ""
	I0203 11:45:58.455497  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.455508  166532 logs.go:284] No container was found matching "etcd"
	I0203 11:45:58.455516  166532 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:45:58.455581  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:45:58.490684  166532 cri.go:89] found id: ""
	I0203 11:45:58.490723  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.490731  166532 logs.go:284] No container was found matching "coredns"
	I0203 11:45:58.490737  166532 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:45:58.490800  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:45:58.527834  166532 cri.go:89] found id: ""
	I0203 11:45:58.527864  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.527876  166532 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:45:58.527885  166532 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:45:58.527975  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:45:58.566697  166532 cri.go:89] found id: ""
	I0203 11:45:58.566736  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.566748  166532 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:45:58.566756  166532 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:45:58.566823  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:45:58.628617  166532 cri.go:89] found id: ""
	I0203 11:45:58.628658  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.628669  166532 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:45:58.628678  166532 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:45:58.628747  166532 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:45:58.670365  166532 cri.go:89] found id: ""
	I0203 11:45:58.670400  166532 logs.go:282] 0 containers: []
	W0203 11:45:58.670411  166532 logs.go:284] No container was found matching "kindnet"
	I0203 11:45:58.670426  166532 logs.go:123] Gathering logs for kubelet ...
	I0203 11:45:58.670445  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:45:58.737071  166532 logs.go:123] Gathering logs for dmesg ...
	I0203 11:45:58.737111  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:45:58.751076  166532 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:45:58.751118  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:45:58.900814  166532 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:45:58.900855  166532 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:45:58.900873  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:45:59.010218  166532 logs.go:123] Gathering logs for container status ...
	I0203 11:45:59.010261  166532 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0203 11:45:59.050788  166532 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 11:45:59.050848  166532 out.go:270] * 
	* 
	W0203 11:45:59.050922  166532 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:45:59.050939  166532 out.go:270] * 
	* 
	W0203 11:45:59.051840  166532 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 11:45:59.055012  166532 out.go:201] 
	W0203 11:45:59.056333  166532 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:45:59.056391  166532 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 11:45:59.056420  166532 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 11:45:59.057989  166532 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 6 (291.163426ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 11:45:59.402291  172462 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-517711" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-517711" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.60s)
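
The kubeadm output captured above shows the kubelet never answering its health check on 127.0.0.1:10248, and minikube itself points at a kubelet/CRI-O cgroup-driver mismatch. A minimal triage sketch against this profile, using only commands already named in the log (profile name, socket path, and start flags are taken from this run and may differ elsewhere):

	# Why did the kubelet stop? (commands suggested in the kubeadm error text)
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo journalctl -xeu kubelet | tail -n 50"

	# Did CRI-O start any control-plane containers at all?
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup driver minikube suggests for this failure,
	# keeping the same driver/runtime/version the test uses
	out/minikube-linux-amd64 start -p old-k8s-version-517711 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd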

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-517711 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-517711 create -f testdata/busybox.yaml: exit status 1 (56.143039ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-517711" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-517711 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
E0203 11:45:59.540001  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 6 (274.242614ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 11:45:59.727131  172501 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-517711" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-517711" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 6 (266.019664ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 11:45:59.998880  172568 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-517711" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-517711" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.60s)
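
This failure is downstream of the first start never completing: "old-k8s-version-517711" has no entry in the kubeconfig the tests use, so every kubectl --context call exits immediately and minikube status only warns about a stale context. A sketch of the two checks that warning points at, assuming the kubeconfig path shown in the log above:

	# Does the profile's context exist in the test kubeconfig?
	KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig kubectl config get-contexts

	# If the cluster is actually up, rewrite the stale context/endpoint as the warning suggests
	out/minikube-linux-amd64 -p old-k8s-version-517711 update-context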

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (74.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-517711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0203 11:46:00.821396  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:03.383831  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:06.344076  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:08.505816  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:11.268257  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:18.747429  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:31.011958  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:39.229736  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.615859  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.622262  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.633640  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.655050  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.696469  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.778057  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:41.940133  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:42.262028  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:42.904153  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:44.185719  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:46.747868  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:51.869691  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:46:52.180170  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:01.719631  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:01.726088  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:01.737510  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:01.758985  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:01.800471  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:01.881980  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:02.043696  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:02.111231  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:02.365545  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:03.007973  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:04.290072  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:05.200341  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:06.852079  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:11.973692  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-517711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m14.255049516s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-517711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-517711 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-517711 describe deploy/metrics-server -n kube-system: exit status 1 (44.905539ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-517711" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-517711 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 6 (248.852183ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 11:47:14.539875  172935 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-517711" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-517711" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (74.55s)
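
The MK_ADDON_ENABLE failure has the same root cause: the addon manifests are applied with the in-VM kubectl against localhost:8443, and nothing is listening there because the control plane never started. A quick pre-check before re-running the enable step could look like this; the /readyz probe is an assumption about how one might confirm the apiserver is answering, while the enable command itself is copied from this run:

	# Is anything answering on the apiserver port inside the VM?
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo curl -sk https://localhost:8443/readyz"

	# Re-run the addon enable with the same image/registry overrides once the apiserver responds
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-517711 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain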

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (514.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0203 11:47:22.215250  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:22.592897  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:33.190179  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:42.696862  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:47:52.933541  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:48:03.555186  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:48:22.483557  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:48:23.659046  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:48:42.112956  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:48:50.185672  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:49:00.130352  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:49:08.317648  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:49:25.477387  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:49:36.021896  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:49:45.580846  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:49:49.330134  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:50:09.072195  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:50:17.032061  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m32.433832397s)

                                                
                                                
-- stdout --
	* [old-k8s-version-517711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-517711" primary control-plane node in "old-k8s-version-517711" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-517711" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:47:21.112590  173069 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:47:21.112715  173069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:47:21.112727  173069 out.go:358] Setting ErrFile to fd 2...
	I0203 11:47:21.112731  173069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:47:21.112916  173069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:47:21.113543  173069 out.go:352] Setting JSON to false
	I0203 11:47:21.114618  173069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8983,"bootTime":1738574258,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:47:21.114721  173069 start.go:139] virtualization: kvm guest
	I0203 11:47:21.117085  173069 out.go:177] * [old-k8s-version-517711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:47:21.118477  173069 notify.go:220] Checking for updates...
	I0203 11:47:21.118500  173069 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:47:21.119825  173069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:47:21.121110  173069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:47:21.122182  173069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:47:21.123232  173069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:47:21.124410  173069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:47:21.125918  173069 config.go:182] Loaded profile config "old-k8s-version-517711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0203 11:47:21.126359  173069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:47:21.126407  173069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:47:21.142354  173069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46863
	I0203 11:47:21.142899  173069 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:47:21.143585  173069 main.go:141] libmachine: Using API Version  1
	I0203 11:47:21.143629  173069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:47:21.144092  173069 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:47:21.144288  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:21.146412  173069 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0203 11:47:21.147871  173069 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:47:21.148366  173069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:47:21.148426  173069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:47:21.166469  173069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0203 11:47:21.166961  173069 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:47:21.167588  173069 main.go:141] libmachine: Using API Version  1
	I0203 11:47:21.167641  173069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:47:21.168102  173069 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:47:21.168364  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:21.208420  173069 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 11:47:21.209604  173069 start.go:297] selected driver: kvm2
	I0203 11:47:21.209619  173069 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:47:21.209735  173069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:47:21.210523  173069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:47:21.210614  173069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:47:21.230292  173069 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:47:21.230904  173069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:47:21.230954  173069 cni.go:84] Creating CNI manager for ""
	I0203 11:47:21.231019  173069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:47:21.231073  173069 start.go:340] cluster config:
	{Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:47:21.231269  173069 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:47:21.233655  173069 out.go:177] * Starting "old-k8s-version-517711" primary control-plane node in "old-k8s-version-517711" cluster
	I0203 11:47:21.235064  173069 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:47:21.235119  173069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0203 11:47:21.235127  173069 cache.go:56] Caching tarball of preloaded images
	I0203 11:47:21.235232  173069 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:47:21.235243  173069 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0203 11:47:21.235345  173069 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json ...
	I0203 11:47:21.235586  173069 start.go:360] acquireMachinesLock for old-k8s-version-517711: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:47:21.235634  173069 start.go:364] duration metric: took 24.726µs to acquireMachinesLock for "old-k8s-version-517711"
	I0203 11:47:21.235653  173069 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:47:21.235659  173069 fix.go:54] fixHost starting: 
	I0203 11:47:21.235928  173069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:47:21.235985  173069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:47:21.250820  173069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I0203 11:47:21.251594  173069 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:47:21.252582  173069 main.go:141] libmachine: Using API Version  1
	I0203 11:47:21.252606  173069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:47:21.253951  173069 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:47:21.254435  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:21.254661  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetState
	I0203 11:47:21.256510  173069 fix.go:112] recreateIfNeeded on old-k8s-version-517711: state=Stopped err=<nil>
	I0203 11:47:21.256537  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	W0203 11:47:21.256688  173069 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:47:21.258662  173069 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-517711" ...
	I0203 11:47:21.259984  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .Start
	I0203 11:47:21.260253  173069 main.go:141] libmachine: (old-k8s-version-517711) starting domain...
	I0203 11:47:21.260267  173069 main.go:141] libmachine: (old-k8s-version-517711) ensuring networks are active...
	I0203 11:47:21.261196  173069 main.go:141] libmachine: (old-k8s-version-517711) Ensuring network default is active
	I0203 11:47:21.261574  173069 main.go:141] libmachine: (old-k8s-version-517711) Ensuring network mk-old-k8s-version-517711 is active
	I0203 11:47:21.262059  173069 main.go:141] libmachine: (old-k8s-version-517711) getting domain XML...
	I0203 11:47:21.262985  173069 main.go:141] libmachine: (old-k8s-version-517711) creating domain...
	I0203 11:47:22.591820  173069 main.go:141] libmachine: (old-k8s-version-517711) waiting for IP...
	I0203 11:47:22.592726  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:22.593256  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:22.593340  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:22.593263  173105 retry.go:31] will retry after 200.227497ms: waiting for domain to come up
	I0203 11:47:22.794878  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:22.795451  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:22.795527  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:22.795433  173105 retry.go:31] will retry after 358.160393ms: waiting for domain to come up
	I0203 11:47:23.154932  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:23.155416  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:23.155445  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:23.155391  173105 retry.go:31] will retry after 330.547076ms: waiting for domain to come up
	I0203 11:47:23.487607  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:23.488225  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:23.488254  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:23.488125  173105 retry.go:31] will retry after 401.916476ms: waiting for domain to come up
	I0203 11:47:23.891857  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:23.892490  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:23.892527  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:23.892473  173105 retry.go:31] will retry after 598.875903ms: waiting for domain to come up
	I0203 11:47:24.493408  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:24.493888  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:24.493916  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:24.493841  173105 retry.go:31] will retry after 769.402004ms: waiting for domain to come up
	I0203 11:47:25.264480  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:25.264993  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:25.265025  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:25.264968  173105 retry.go:31] will retry after 1.142028234s: waiting for domain to come up
	I0203 11:47:26.408877  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:26.409400  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:26.409432  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:26.409350  173105 retry.go:31] will retry after 1.436115253s: waiting for domain to come up
	I0203 11:47:27.847128  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:27.847693  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:27.847722  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:27.847662  173105 retry.go:31] will retry after 1.220435557s: waiting for domain to come up
	I0203 11:47:29.070154  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:29.070805  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:29.070835  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:29.070752  173105 retry.go:31] will retry after 2.20391253s: waiting for domain to come up
	I0203 11:47:31.276360  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:31.276943  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:31.276971  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:31.276901  173105 retry.go:31] will retry after 2.700005051s: waiting for domain to come up
	I0203 11:47:33.979641  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:33.980170  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:33.980193  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:33.980148  173105 retry.go:31] will retry after 2.554431521s: waiting for domain to come up
	I0203 11:47:36.536099  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:36.536609  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | unable to find current IP address of domain old-k8s-version-517711 in network mk-old-k8s-version-517711
	I0203 11:47:36.536654  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | I0203 11:47:36.536592  173105 retry.go:31] will retry after 3.743395979s: waiting for domain to come up
	I0203 11:47:40.284174  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.284712  173069 main.go:141] libmachine: (old-k8s-version-517711) found domain IP: 192.168.61.203
	I0203 11:47:40.284743  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has current primary IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.284759  173069 main.go:141] libmachine: (old-k8s-version-517711) reserving static IP address...
	I0203 11:47:40.285255  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "old-k8s-version-517711", mac: "52:54:00:e5:0b:11", ip: "192.168.61.203"} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.285292  173069 main.go:141] libmachine: (old-k8s-version-517711) reserved static IP address 192.168.61.203 for domain old-k8s-version-517711
	I0203 11:47:40.285312  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | skip adding static IP to network mk-old-k8s-version-517711 - found existing host DHCP lease matching {name: "old-k8s-version-517711", mac: "52:54:00:e5:0b:11", ip: "192.168.61.203"}
	I0203 11:47:40.285333  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | Getting to WaitForSSH function...
	I0203 11:47:40.285346  173069 main.go:141] libmachine: (old-k8s-version-517711) waiting for SSH...
	I0203 11:47:40.287919  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.288298  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.288326  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.288473  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | Using SSH client type: external
	I0203 11:47:40.288496  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa (-rw-------)
	I0203 11:47:40.288523  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:47:40.288535  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | About to run SSH command:
	I0203 11:47:40.288545  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | exit 0
	I0203 11:47:40.418077  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | SSH cmd err, output: <nil>: 
	I0203 11:47:40.418478  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetConfigRaw
	I0203 11:47:40.419145  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:47:40.421984  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.422422  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.422451  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.422725  173069 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/config.json ...
	I0203 11:47:40.422913  173069 machine.go:93] provisionDockerMachine start ...
	I0203 11:47:40.422931  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:40.423111  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:40.425148  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.425462  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.425489  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.425604  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:40.425824  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:40.426021  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:40.426180  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:40.426378  173069 main.go:141] libmachine: Using SSH client type: native
	I0203 11:47:40.426591  173069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:47:40.426602  173069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:47:40.541950  173069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:47:40.541984  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:47:40.542302  173069 buildroot.go:166] provisioning hostname "old-k8s-version-517711"
	I0203 11:47:40.542336  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:47:40.542543  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:40.545329  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.545675  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.545715  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.545829  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:40.546066  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:40.546265  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:40.546448  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:40.546657  173069 main.go:141] libmachine: Using SSH client type: native
	I0203 11:47:40.546850  173069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:47:40.546864  173069 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-517711 && echo "old-k8s-version-517711" | sudo tee /etc/hostname
	I0203 11:47:40.677771  173069 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-517711
	
	I0203 11:47:40.677798  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:40.680788  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.681159  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.681192  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.681383  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:40.681577  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:40.681753  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:40.681928  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:40.682137  173069 main.go:141] libmachine: Using SSH client type: native
	I0203 11:47:40.682304  173069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:47:40.682320  173069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-517711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-517711/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-517711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:47:40.803931  173069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:47:40.803980  173069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:47:40.804023  173069 buildroot.go:174] setting up certificates
	I0203 11:47:40.804033  173069 provision.go:84] configureAuth start
	I0203 11:47:40.804044  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetMachineName
	I0203 11:47:40.804363  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:47:40.806713  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.807040  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.807064  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.807228  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:40.809421  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.809722  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:40.809746  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:40.809923  173069 provision.go:143] copyHostCerts
	I0203 11:47:40.809984  173069 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:47:40.810023  173069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:47:40.810108  173069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:47:40.810226  173069 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:47:40.810236  173069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:47:40.810269  173069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:47:40.810342  173069 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:47:40.810352  173069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:47:40.810385  173069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:47:40.810452  173069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-517711 san=[127.0.0.1 192.168.61.203 localhost minikube old-k8s-version-517711]
	I0203 11:47:41.004432  173069 provision.go:177] copyRemoteCerts
	I0203 11:47:41.004488  173069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:47:41.004521  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:41.007110  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.007441  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.007468  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.007628  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:41.007863  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.008056  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:41.008164  173069 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:47:41.099590  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:47:41.125074  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0203 11:47:41.149812  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:47:41.176183  173069 provision.go:87] duration metric: took 372.137398ms to configureAuth
	I0203 11:47:41.176218  173069 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:47:41.176449  173069 config.go:182] Loaded profile config "old-k8s-version-517711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0203 11:47:41.176533  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:41.179169  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.179529  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.179562  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.179725  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:41.179863  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.180036  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.180158  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:41.180327  173069 main.go:141] libmachine: Using SSH client type: native
	I0203 11:47:41.180558  173069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:47:41.180584  173069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:47:41.421472  173069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:47:41.421499  173069 machine.go:96] duration metric: took 998.573209ms to provisionDockerMachine
	I0203 11:47:41.421511  173069 start.go:293] postStartSetup for "old-k8s-version-517711" (driver="kvm2")
	I0203 11:47:41.421521  173069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:47:41.421539  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:41.421881  173069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:47:41.421923  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:41.425056  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.425410  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.425437  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.425656  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:41.425897  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.426096  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:41.426271  173069 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:47:41.513381  173069 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:47:41.517930  173069 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:47:41.517960  173069 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:47:41.518063  173069 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:47:41.518190  173069 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:47:41.518293  173069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:47:41.528856  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:47:41.552913  173069 start.go:296] duration metric: took 131.382644ms for postStartSetup
	I0203 11:47:41.552953  173069 fix.go:56] duration metric: took 20.317294385s for fixHost
	I0203 11:47:41.552974  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:41.555678  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.556130  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.556162  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.556350  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:41.556508  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.556613  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.556757  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:41.556930  173069 main.go:141] libmachine: Using SSH client type: native
	I0203 11:47:41.557111  173069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0203 11:47:41.557122  173069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:47:41.671707  173069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738583261.647057893
	
	I0203 11:47:41.671733  173069 fix.go:216] guest clock: 1738583261.647057893
	I0203 11:47:41.671744  173069 fix.go:229] Guest: 2025-02-03 11:47:41.647057893 +0000 UTC Remote: 2025-02-03 11:47:41.552957821 +0000 UTC m=+20.481065227 (delta=94.100072ms)
	I0203 11:47:41.671772  173069 fix.go:200] guest clock delta is within tolerance: 94.100072ms
	I0203 11:47:41.671779  173069 start.go:83] releasing machines lock for "old-k8s-version-517711", held for 20.436137538s
	I0203 11:47:41.671805  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:41.672089  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:47:41.675000  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.675452  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.675485  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.675654  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:41.676229  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:41.676427  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .DriverName
	I0203 11:47:41.676512  173069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:47:41.676584  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:41.676617  173069 ssh_runner.go:195] Run: cat /version.json
	I0203 11:47:41.676640  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHHostname
	I0203 11:47:41.679205  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.679507  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.679535  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.679560  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.679680  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:41.679851  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.679925  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:41.679952  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:41.680002  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:41.680130  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHPort
	I0203 11:47:41.680210  173069 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:47:41.680307  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHKeyPath
	I0203 11:47:41.680480  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetSSHUsername
	I0203 11:47:41.680640  173069 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/old-k8s-version-517711/id_rsa Username:docker}
	I0203 11:47:41.792885  173069 ssh_runner.go:195] Run: systemctl --version
	I0203 11:47:41.798852  173069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:47:41.943546  173069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:47:41.949426  173069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:47:41.949509  173069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:47:41.967394  173069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:47:41.967418  173069 start.go:495] detecting cgroup driver to use...
	I0203 11:47:41.967482  173069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:47:41.985396  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:47:42.000757  173069 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:47:42.000820  173069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:47:42.014475  173069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:47:42.028538  173069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:47:42.172956  173069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:47:42.328395  173069 docker.go:233] disabling docker service ...
	I0203 11:47:42.328475  173069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:47:42.343931  173069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:47:42.357277  173069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:47:42.492149  173069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:47:42.618554  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:47:42.632866  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:47:42.652760  173069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0203 11:47:42.652831  173069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:47:42.663395  173069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:47:42.663473  173069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:47:42.674972  173069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:47:42.685288  173069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:47:42.695647  173069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:47:42.705483  173069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:47:42.714360  173069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:47:42.714424  173069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:47:42.727926  173069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:47:42.737410  173069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:47:42.857417  173069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:47:42.945630  173069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:47:42.945720  173069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:47:42.950641  173069 start.go:563] Will wait 60s for crictl version
	I0203 11:47:42.950697  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:42.954251  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:47:42.996981  173069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:47:42.997062  173069 ssh_runner.go:195] Run: crio --version
	I0203 11:47:43.024416  173069 ssh_runner.go:195] Run: crio --version
	I0203 11:47:43.057591  173069 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0203 11:47:43.058811  173069 main.go:141] libmachine: (old-k8s-version-517711) Calling .GetIP
	I0203 11:47:43.061790  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:43.062181  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0b:11", ip: ""} in network mk-old-k8s-version-517711: {Iface:virbr1 ExpiryTime:2025-02-03 12:47:32 +0000 UTC Type:0 Mac:52:54:00:e5:0b:11 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:old-k8s-version-517711 Clientid:01:52:54:00:e5:0b:11}
	I0203 11:47:43.062226  173069 main.go:141] libmachine: (old-k8s-version-517711) DBG | domain old-k8s-version-517711 has defined IP address 192.168.61.203 and MAC address 52:54:00:e5:0b:11 in network mk-old-k8s-version-517711
	I0203 11:47:43.062453  173069 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0203 11:47:43.066650  173069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:47:43.079609  173069 kubeadm.go:883] updating cluster {Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:47:43.079718  173069 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 11:47:43.079758  173069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:47:43.128555  173069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0203 11:47:43.128620  173069 ssh_runner.go:195] Run: which lz4
	I0203 11:47:43.132628  173069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:47:43.136646  173069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:47:43.136668  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0203 11:47:44.590412  173069 crio.go:462] duration metric: took 1.457795945s to copy over tarball
	I0203 11:47:44.590520  173069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:47:47.525064  173069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.934509318s)
	I0203 11:47:47.525108  173069 crio.go:469] duration metric: took 2.934649078s to extract the tarball
	I0203 11:47:47.525117  173069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:47:47.567588  173069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:47:47.600477  173069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0203 11:47:47.600503  173069 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0203 11:47:47.600551  173069 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:47:47.600619  173069 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:47.600634  173069 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:47.600604  173069 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:47.600677  173069 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0203 11:47:47.600640  173069 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:47.600661  173069 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0203 11:47:47.600588  173069 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:47.602139  173069 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0203 11:47:47.602225  173069 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:47.602244  173069 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:47.602247  173069 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:47.602244  173069 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:47.602268  173069 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:47.602301  173069 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0203 11:47:47.602316  173069 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:47:47.816788  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:47.816796  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0203 11:47:47.818823  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:47.819740  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:47.825352  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0203 11:47:47.834792  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:47.851562  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:47.970767  173069 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0203 11:47:47.970839  173069 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:47.970851  173069 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0203 11:47:47.970889  173069 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0203 11:47:47.970894  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:47.970930  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:47.996414  173069 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0203 11:47:47.996473  173069 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:47.996522  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:47.997067  173069 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0203 11:47:47.997094  173069 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0203 11:47:47.997109  173069 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0203 11:47:47.997129  173069 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:47.997153  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:47.997175  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:48.012338  173069 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0203 11:47:48.012384  173069 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:48.012440  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:48.022129  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:47:48.022167  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:48.022172  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:48.022144  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:48.022234  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:47:48.022252  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:48.022345  173069 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0203 11:47:48.022382  173069 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:48.022416  173069 ssh_runner.go:195] Run: which crictl
	I0203 11:47:48.139140  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:48.139485  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:47:48.158180  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:48.158306  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:48.158366  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:48.158366  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:48.158310  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:47:48.248249  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:48.284939  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0203 11:47:48.294728  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0203 11:47:48.319920  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0203 11:47:48.319920  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0203 11:47:48.325611  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0203 11:47:48.325663  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0203 11:47:48.399198  173069 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0203 11:47:48.437674  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0203 11:47:48.453480  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0203 11:47:48.461436  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0203 11:47:48.476550  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0203 11:47:48.490975  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0203 11:47:48.491004  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0203 11:47:48.500308  173069 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0203 11:47:48.784553  173069 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:47:48.932852  173069 cache_images.go:92] duration metric: took 1.332329876s to LoadCachedImages
	W0203 11:47:48.932991  173069 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20354-109432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
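The block above is the LoadCachedImages cycle: each required image is looked up in the runtime with podman image inspect, marked "needs transfer" when it is missing at the pinned hash, removed with crictl rmi if a stale copy exists, and then loaded from the host cache under .minikube/cache/images. Here the cache files are absent, hence the warning, and the start simply continues. A sketch of the per-image check, using pause:3.2 from the log as the example:

	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2   # is the image present, and at which ID?
	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2                       # drop any stale copy before a reload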
	I0203 11:47:48.933024  173069 kubeadm.go:934] updating node { 192.168.61.203 8443 v1.20.0 crio true true} ...
	I0203 11:47:48.933164  173069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-517711 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
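The [Unit]/[Service] fragment above is the kubelet drop-in minikube generates for this node; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. One way to confirm what the node's kubelet actually runs with, assuming systemd tooling is reachable on the guest (these two commands are not part of the log):

	systemctl cat kubelet                # unit file plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart  # the effective ExecStart with all flags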
	I0203 11:47:48.933249  173069 ssh_runner.go:195] Run: crio config
	I0203 11:47:48.983214  173069 cni.go:84] Creating CNI manager for ""
	I0203 11:47:48.983238  173069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:47:48.983248  173069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:47:48.983269  173069 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.203 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-517711 NodeName:old-k8s-version-517711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0203 11:47:48.983393  173069 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-517711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
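The combined InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is written to /var/tmp/minikube/kubeadm.yaml.new further down and then consumed phase by phase rather than through a single kubeadm init. As a rough sketch, the first two phases the log runs against it look like this (paths and the staged v1.20.0 binaries are taken from the log):

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml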
	
	I0203 11:47:48.983451  173069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0203 11:47:48.993840  173069 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:47:48.993931  173069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:47:49.003342  173069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0203 11:47:49.019789  173069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:47:49.039080  173069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0203 11:47:49.055983  173069 ssh_runner.go:195] Run: grep 192.168.61.203	control-plane.minikube.internal$ /etc/hosts
	I0203 11:47:49.059772  173069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:47:49.073028  173069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:47:49.199088  173069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:47:49.219080  173069 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711 for IP: 192.168.61.203
	I0203 11:47:49.219100  173069 certs.go:194] generating shared ca certs ...
	I0203 11:47:49.219116  173069 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:47:49.219267  173069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:47:49.219307  173069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:47:49.219315  173069 certs.go:256] generating profile certs ...
	I0203 11:47:49.219411  173069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/client.key
	I0203 11:47:49.219466  173069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key.067e8325
	I0203 11:47:49.219498  173069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.key
	I0203 11:47:49.219611  173069 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:47:49.219651  173069 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:47:49.219663  173069 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:47:49.219685  173069 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:47:49.219706  173069 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:47:49.219726  173069 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:47:49.219766  173069 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:47:49.220463  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:47:49.282602  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:47:49.308802  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:47:49.340952  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:47:49.368610  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0203 11:47:49.416989  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:47:49.445484  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:47:49.491961  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/old-k8s-version-517711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:47:49.517282  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:47:49.543340  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:47:49.569106  173069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:47:49.594673  173069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:47:49.611823  173069 ssh_runner.go:195] Run: openssl version
	I0203 11:47:49.617511  173069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:47:49.628955  173069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:47:49.633603  173069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:47:49.633677  173069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:47:49.639498  173069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:47:49.651116  173069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:47:49.664193  173069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:47:49.669164  173069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:47:49.669319  173069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:47:49.675267  173069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:47:49.686895  173069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:47:49.698847  173069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:47:49.703831  173069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:47:49.703891  173069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:47:49.709779  173069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
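The openssl/ln pairs above install each CA into the system trust store using OpenSSL's subject-hash convention: the PEM is hashed and a <hash>.0 symlink is created under /etc/ssl/certs so library lookups can find it. The same idiom for minikubeCA.pem, written as a two-liner (the HASH variable is just shorthand; the commands mirror the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"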
	I0203 11:47:49.721403  173069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:47:49.725963  173069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:47:49.732545  173069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:47:49.738396  173069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:47:49.744586  173069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:47:49.750509  173069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:47:49.756415  173069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
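The -checkend 86400 probes above ask openssl whether each control-plane certificate will expire within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. For example, for one of the certs checked above:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h" || echo "expires within 24h"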
	I0203 11:47:49.762162  173069 kubeadm.go:392] StartCluster: {Name:old-k8s-version-517711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-517711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:47:49.762269  173069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:47:49.762325  173069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:47:49.803601  173069 cri.go:89] found id: ""
	I0203 11:47:49.803683  173069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:47:49.814794  173069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 11:47:49.814813  173069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 11:47:49.814866  173069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 11:47:49.824276  173069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:47:49.825155  173069 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-517711" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:47:49.825746  173069 kubeconfig.go:62] /home/jenkins/minikube-integration/20354-109432/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-517711" cluster setting kubeconfig missing "old-k8s-version-517711" context setting]
	I0203 11:47:49.826783  173069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:47:49.913910  173069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 11:47:49.926080  173069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.203
	I0203 11:47:49.926133  173069 kubeadm.go:1160] stopping kube-system containers ...
	I0203 11:47:49.926152  173069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0203 11:47:49.926225  173069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:47:49.965558  173069 cri.go:89] found id: ""
	I0203 11:47:49.965625  173069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 11:47:49.981793  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:47:49.991530  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:47:49.991553  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:47:49.991607  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:47:50.000521  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:47:50.000579  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:47:50.010014  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:47:50.018630  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:47:50.018689  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:47:50.028131  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:47:50.037135  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:47:50.037193  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:47:50.047549  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:47:50.056669  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:47:50.056734  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
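The grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it (here none of the four files exist yet). The same logic as a compact loop (a sketch of the idea, not the code minikube runs):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done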
	I0203 11:47:50.065950  173069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:47:50.075803  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:47:50.194280  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:47:51.334476  173069 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.140151525s)
	I0203 11:47:51.334509  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:47:51.571491  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:47:51.702796  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:47:51.781063  173069 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:47:51.781169  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:52.281566  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:52.782135  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:53.281664  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:53.781882  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:54.282199  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:54.782252  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:55.281468  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:55.781542  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:56.281801  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:56.782261  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:57.281374  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:57.782262  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:58.281281  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:58.782291  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:59.281570  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:47:59.781963  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:00.282205  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:00.782150  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:01.281511  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:01.782285  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:02.282084  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:02.781430  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:03.281918  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:03.781372  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:04.281944  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:04.781330  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:05.282180  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:05.782148  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:06.282215  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:06.781773  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:07.281914  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:07.782165  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:08.281322  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:08.782153  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:09.282238  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:09.781235  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:10.281891  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:10.781354  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:11.281602  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:11.781818  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:12.281486  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:12.781872  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:13.281866  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:13.781367  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:14.281498  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:14.781586  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:15.282082  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:15.781536  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:16.281691  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:16.781275  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:17.281807  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:17.781779  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:18.281514  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:18.781555  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:19.281715  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:19.782045  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:20.281788  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:20.781915  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:21.281514  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:21.781589  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:22.282146  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:22.781701  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:23.281299  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:23.782044  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:24.282151  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:24.781945  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:25.282122  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:25.781566  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:26.281595  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:26.781735  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:27.281609  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:27.781975  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:28.281959  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:28.781358  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:29.281887  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:29.781570  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:30.282066  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:30.782168  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:31.281782  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:31.781985  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:32.281432  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:32.782033  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:33.281362  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:33.781815  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:34.281871  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:34.782276  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:35.281370  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:35.782052  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:36.282166  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:36.781481  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:37.282124  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:37.782029  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:38.281434  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:38.782106  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:39.281833  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:39.782041  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:40.281530  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:40.781702  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:41.282213  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:41.782208  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:42.281600  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:42.782215  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:43.281350  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:43.781697  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:44.282122  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:44.782158  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:45.281329  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:45.781741  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:46.281877  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:46.781834  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:47.281275  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:47.781857  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:48.281574  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:48.782010  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:49.281728  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:49.782123  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:50.281582  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:50.781491  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:51.281906  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
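The long run of pgrep calls above is the apiserver wait loop: roughly every 500ms minikube checks for a kube-apiserver process, and after about a minute without a match it falls back to inspecting containers and collecting logs. A rough shell equivalent of the poll (my wording of the loop, not minikube's code):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done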
	I0203 11:48:51.781471  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:48:51.781548  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:48:51.822216  173069 cri.go:89] found id: ""
	I0203 11:48:51.822243  173069 logs.go:282] 0 containers: []
	W0203 11:48:51.822252  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:48:51.822259  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:48:51.822330  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:48:51.864063  173069 cri.go:89] found id: ""
	I0203 11:48:51.864101  173069 logs.go:282] 0 containers: []
	W0203 11:48:51.864113  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:48:51.864121  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:48:51.864191  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:48:51.901118  173069 cri.go:89] found id: ""
	I0203 11:48:51.901146  173069 logs.go:282] 0 containers: []
	W0203 11:48:51.901156  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:48:51.901176  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:48:51.901243  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:48:51.934437  173069 cri.go:89] found id: ""
	I0203 11:48:51.934463  173069 logs.go:282] 0 containers: []
	W0203 11:48:51.934471  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:48:51.934477  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:48:51.934531  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:48:51.973372  173069 cri.go:89] found id: ""
	I0203 11:48:51.973402  173069 logs.go:282] 0 containers: []
	W0203 11:48:51.973409  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:48:51.973416  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:48:51.973472  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:48:52.021104  173069 cri.go:89] found id: ""
	I0203 11:48:52.021131  173069 logs.go:282] 0 containers: []
	W0203 11:48:52.021139  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:48:52.021145  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:48:52.021213  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:48:52.057444  173069 cri.go:89] found id: ""
	I0203 11:48:52.057479  173069 logs.go:282] 0 containers: []
	W0203 11:48:52.057491  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:48:52.057500  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:48:52.057562  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:48:52.095413  173069 cri.go:89] found id: ""
	I0203 11:48:52.095440  173069 logs.go:282] 0 containers: []
	W0203 11:48:52.095448  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:48:52.095457  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:48:52.095471  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:48:52.228229  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:48:52.228263  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:48:52.228278  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:48:52.309968  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:48:52.310018  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:48:52.349618  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:48:52.349648  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:48:52.400367  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:48:52.400406  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
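When no control-plane containers are found, the fallback is to gather raw node logs: the kubelet and CRI-O journals, dmesg, and a container listing, as run above. To replay the same collection from the host (assuming the minikube binary and this profile are still available; the ssh wrapping is my addition):

	minikube -p old-k8s-version-517711 ssh "sudo journalctl -u kubelet -n 400"
	minikube -p old-k8s-version-517711 ssh "sudo journalctl -u crio -n 400"
	minikube -p old-k8s-version-517711 ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	minikube -p old-k8s-version-517711 ssh "sudo crictl ps -a"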
	I0203 11:48:54.914541  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:54.928736  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:48:54.928799  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:48:54.965955  173069 cri.go:89] found id: ""
	I0203 11:48:54.965986  173069 logs.go:282] 0 containers: []
	W0203 11:48:54.966019  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:48:54.966028  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:48:54.966096  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:48:54.998441  173069 cri.go:89] found id: ""
	I0203 11:48:54.998475  173069 logs.go:282] 0 containers: []
	W0203 11:48:54.998486  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:48:54.998494  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:48:54.998560  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:48:55.030014  173069 cri.go:89] found id: ""
	I0203 11:48:55.030060  173069 logs.go:282] 0 containers: []
	W0203 11:48:55.030077  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:48:55.030086  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:48:55.030162  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:48:55.076799  173069 cri.go:89] found id: ""
	I0203 11:48:55.076830  173069 logs.go:282] 0 containers: []
	W0203 11:48:55.076841  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:48:55.076850  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:48:55.076917  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:48:55.133955  173069 cri.go:89] found id: ""
	I0203 11:48:55.133991  173069 logs.go:282] 0 containers: []
	W0203 11:48:55.134021  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:48:55.134031  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:48:55.134100  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:48:55.194338  173069 cri.go:89] found id: ""
	I0203 11:48:55.194368  173069 logs.go:282] 0 containers: []
	W0203 11:48:55.194376  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:48:55.194384  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:48:55.194456  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:48:55.227542  173069 cri.go:89] found id: ""
	I0203 11:48:55.227577  173069 logs.go:282] 0 containers: []
	W0203 11:48:55.227589  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:48:55.227597  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:48:55.227658  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:48:55.263976  173069 cri.go:89] found id: ""
	I0203 11:48:55.264007  173069 logs.go:282] 0 containers: []
	W0203 11:48:55.264019  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:48:55.264032  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:48:55.264055  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:48:55.276722  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:48:55.276750  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:48:55.351501  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:48:55.351529  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:48:55.351544  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:48:55.422328  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:48:55.422369  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:48:55.462743  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:48:55.462777  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:48:58.015247  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:48:58.028095  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:48:58.028170  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:48:58.063684  173069 cri.go:89] found id: ""
	I0203 11:48:58.063719  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.063731  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:48:58.063745  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:48:58.063805  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:48:58.097144  173069 cri.go:89] found id: ""
	I0203 11:48:58.097168  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.097176  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:48:58.097182  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:48:58.097232  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:48:58.130676  173069 cri.go:89] found id: ""
	I0203 11:48:58.130706  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.130722  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:48:58.130728  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:48:58.130777  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:48:58.165993  173069 cri.go:89] found id: ""
	I0203 11:48:58.166030  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.166038  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:48:58.166045  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:48:58.166140  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:48:58.199186  173069 cri.go:89] found id: ""
	I0203 11:48:58.199214  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.199222  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:48:58.199229  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:48:58.199291  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:48:58.231755  173069 cri.go:89] found id: ""
	I0203 11:48:58.231785  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.231796  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:48:58.231805  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:48:58.231871  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:48:58.264377  173069 cri.go:89] found id: ""
	I0203 11:48:58.264407  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.264418  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:48:58.264426  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:48:58.264484  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:48:58.297071  173069 cri.go:89] found id: ""
	I0203 11:48:58.297096  173069 logs.go:282] 0 containers: []
	W0203 11:48:58.297103  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:48:58.297112  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:48:58.297126  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:48:58.337034  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:48:58.337069  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:48:58.390378  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:48:58.390428  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:48:58.403072  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:48:58.403101  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:48:58.474684  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:48:58.474708  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:48:58.474727  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:01.053754  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:01.067055  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:01.067149  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:01.102293  173069 cri.go:89] found id: ""
	I0203 11:49:01.102320  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.102333  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:01.102342  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:01.102420  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:01.136304  173069 cri.go:89] found id: ""
	I0203 11:49:01.136336  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.136348  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:01.136357  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:01.136433  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:01.171433  173069 cri.go:89] found id: ""
	I0203 11:49:01.171464  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.171473  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:01.171480  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:01.171542  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:01.207736  173069 cri.go:89] found id: ""
	I0203 11:49:01.207772  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.207783  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:01.207790  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:01.207850  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:01.244223  173069 cri.go:89] found id: ""
	I0203 11:49:01.244261  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.244275  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:01.244284  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:01.244354  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:01.283526  173069 cri.go:89] found id: ""
	I0203 11:49:01.283552  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.283560  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:01.283566  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:01.283616  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:01.321928  173069 cri.go:89] found id: ""
	I0203 11:49:01.321956  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.321966  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:01.321974  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:01.322069  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:01.361180  173069 cri.go:89] found id: ""
	I0203 11:49:01.361212  173069 logs.go:282] 0 containers: []
	W0203 11:49:01.361221  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:01.361232  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:01.361245  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:01.398952  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:01.398982  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:01.450220  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:01.450269  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:01.463919  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:01.463958  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:01.534478  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:01.534506  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:01.534524  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:04.119240  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:04.133926  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:04.133991  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:04.171282  173069 cri.go:89] found id: ""
	I0203 11:49:04.171312  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.171321  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:04.171329  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:04.171390  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:04.205070  173069 cri.go:89] found id: ""
	I0203 11:49:04.205111  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.205136  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:04.205146  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:04.205215  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:04.240858  173069 cri.go:89] found id: ""
	I0203 11:49:04.240883  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.240892  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:04.240899  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:04.240950  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:04.275416  173069 cri.go:89] found id: ""
	I0203 11:49:04.275454  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.275466  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:04.275475  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:04.275539  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:04.313140  173069 cri.go:89] found id: ""
	I0203 11:49:04.313175  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.313188  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:04.313196  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:04.313264  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:04.349587  173069 cri.go:89] found id: ""
	I0203 11:49:04.349616  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.349626  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:04.349635  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:04.349697  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:04.382688  173069 cri.go:89] found id: ""
	I0203 11:49:04.382721  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.382734  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:04.382750  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:04.382824  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:04.415893  173069 cri.go:89] found id: ""
	I0203 11:49:04.415924  173069 logs.go:282] 0 containers: []
	W0203 11:49:04.415935  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:04.415949  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:04.415965  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:04.465404  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:04.465451  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:04.478906  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:04.478933  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:04.563021  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:04.563051  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:04.563069  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:04.641514  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:04.641563  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:07.185230  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:07.198300  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:07.198362  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:07.231743  173069 cri.go:89] found id: ""
	I0203 11:49:07.231777  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.231787  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:07.231796  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:07.231856  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:07.268515  173069 cri.go:89] found id: ""
	I0203 11:49:07.268542  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.268560  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:07.268569  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:07.268643  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:07.303134  173069 cri.go:89] found id: ""
	I0203 11:49:07.303162  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.303170  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:07.303176  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:07.303269  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:07.338779  173069 cri.go:89] found id: ""
	I0203 11:49:07.338811  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.338820  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:07.338826  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:07.338879  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:07.371618  173069 cri.go:89] found id: ""
	I0203 11:49:07.371708  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.371725  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:07.371735  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:07.371810  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:07.410147  173069 cri.go:89] found id: ""
	I0203 11:49:07.410185  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.410263  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:07.410289  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:07.410360  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:07.449299  173069 cri.go:89] found id: ""
	I0203 11:49:07.449328  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.449336  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:07.449342  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:07.449404  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:07.487077  173069 cri.go:89] found id: ""
	I0203 11:49:07.487115  173069 logs.go:282] 0 containers: []
	W0203 11:49:07.487124  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:07.487133  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:07.487147  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:07.563448  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:07.563492  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:07.603826  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:07.603857  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:07.655186  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:07.655221  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:07.668256  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:07.668287  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:07.735390  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:10.236336  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:10.249443  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:10.249522  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:10.283043  173069 cri.go:89] found id: ""
	I0203 11:49:10.283073  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.283082  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:10.283088  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:10.283148  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:10.317869  173069 cri.go:89] found id: ""
	I0203 11:49:10.317904  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.317915  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:10.317923  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:10.317985  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:10.354278  173069 cri.go:89] found id: ""
	I0203 11:49:10.354304  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.354313  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:10.354322  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:10.354398  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:10.388283  173069 cri.go:89] found id: ""
	I0203 11:49:10.388312  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.388320  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:10.388327  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:10.388384  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:10.424331  173069 cri.go:89] found id: ""
	I0203 11:49:10.424358  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.424367  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:10.424373  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:10.424438  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:10.457645  173069 cri.go:89] found id: ""
	I0203 11:49:10.457680  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.457701  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:10.457711  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:10.457778  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:10.490521  173069 cri.go:89] found id: ""
	I0203 11:49:10.490546  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.490554  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:10.490560  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:10.490613  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:10.523564  173069 cri.go:89] found id: ""
	I0203 11:49:10.523593  173069 logs.go:282] 0 containers: []
	W0203 11:49:10.523606  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:10.523619  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:10.523638  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:10.536689  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:10.536716  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:10.607551  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:10.607576  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:10.607593  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:10.690726  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:10.690765  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:10.731628  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:10.731665  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:13.286816  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:13.300466  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:13.300551  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:13.337216  173069 cri.go:89] found id: ""
	I0203 11:49:13.337254  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.337267  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:13.337275  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:13.337345  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:13.374237  173069 cri.go:89] found id: ""
	I0203 11:49:13.374273  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.374282  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:13.374288  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:13.374341  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:13.411306  173069 cri.go:89] found id: ""
	I0203 11:49:13.411357  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.411370  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:13.411379  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:13.411448  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:13.450879  173069 cri.go:89] found id: ""
	I0203 11:49:13.450914  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.450926  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:13.450935  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:13.451004  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:13.484236  173069 cri.go:89] found id: ""
	I0203 11:49:13.484274  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.484287  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:13.484296  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:13.484359  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:13.520037  173069 cri.go:89] found id: ""
	I0203 11:49:13.520070  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.520082  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:13.520090  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:13.520154  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:13.554472  173069 cri.go:89] found id: ""
	I0203 11:49:13.554502  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.554512  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:13.554522  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:13.554587  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:13.591584  173069 cri.go:89] found id: ""
	I0203 11:49:13.591616  173069 logs.go:282] 0 containers: []
	W0203 11:49:13.591629  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:13.591642  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:13.591661  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:13.640929  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:13.640965  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:13.655850  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:13.655879  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:13.734234  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:13.734259  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:13.734277  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:13.818524  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:13.818570  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:16.359107  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:16.372708  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:16.372796  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:16.405993  173069 cri.go:89] found id: ""
	I0203 11:49:16.406038  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.406049  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:16.406058  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:16.406142  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:16.443679  173069 cri.go:89] found id: ""
	I0203 11:49:16.443710  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.443719  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:16.443725  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:16.443777  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:16.481576  173069 cri.go:89] found id: ""
	I0203 11:49:16.481611  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.481623  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:16.481632  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:16.481698  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:16.520191  173069 cri.go:89] found id: ""
	I0203 11:49:16.520221  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.520231  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:16.520239  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:16.520306  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:16.557505  173069 cri.go:89] found id: ""
	I0203 11:49:16.557533  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.557542  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:16.557548  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:16.557616  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:16.591870  173069 cri.go:89] found id: ""
	I0203 11:49:16.591900  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.591911  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:16.591920  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:16.591983  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:16.631394  173069 cri.go:89] found id: ""
	I0203 11:49:16.631420  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.631428  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:16.631433  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:16.631485  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:16.669016  173069 cri.go:89] found id: ""
	I0203 11:49:16.669049  173069 logs.go:282] 0 containers: []
	W0203 11:49:16.669061  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:16.669071  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:16.669086  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:16.745960  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:16.745990  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:16.746033  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:16.824724  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:16.824766  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:16.866944  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:16.866981  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:16.915728  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:16.915767  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:19.430624  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:19.444814  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:19.444894  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:19.487949  173069 cri.go:89] found id: ""
	I0203 11:49:19.487982  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.487994  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:19.488002  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:19.488069  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:19.520049  173069 cri.go:89] found id: ""
	I0203 11:49:19.520075  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.520083  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:19.520090  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:19.520154  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:19.552502  173069 cri.go:89] found id: ""
	I0203 11:49:19.552534  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.552545  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:19.552554  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:19.552622  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:19.584672  173069 cri.go:89] found id: ""
	I0203 11:49:19.584698  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.584706  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:19.584713  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:19.584777  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:19.616824  173069 cri.go:89] found id: ""
	I0203 11:49:19.616853  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.616861  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:19.616867  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:19.616928  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:19.649574  173069 cri.go:89] found id: ""
	I0203 11:49:19.649607  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.649617  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:19.649623  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:19.649671  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:19.689598  173069 cri.go:89] found id: ""
	I0203 11:49:19.689632  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.689643  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:19.689651  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:19.689735  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:19.721498  173069 cri.go:89] found id: ""
	I0203 11:49:19.721526  173069 logs.go:282] 0 containers: []
	W0203 11:49:19.721534  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:19.721544  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:19.721557  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:19.759830  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:19.759866  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:19.813945  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:19.813985  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:19.827805  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:19.827831  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:19.909227  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:19.909255  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:19.909271  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:22.487443  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:22.501049  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:22.501122  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:22.533253  173069 cri.go:89] found id: ""
	I0203 11:49:22.533282  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.533292  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:22.533302  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:22.533364  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:22.564420  173069 cri.go:89] found id: ""
	I0203 11:49:22.564450  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.564462  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:22.564470  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:22.564535  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:22.598965  173069 cri.go:89] found id: ""
	I0203 11:49:22.598993  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.599002  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:22.599008  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:22.599092  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:22.632442  173069 cri.go:89] found id: ""
	I0203 11:49:22.632470  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.632478  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:22.632485  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:22.632554  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:22.666544  173069 cri.go:89] found id: ""
	I0203 11:49:22.666583  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.666596  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:22.666606  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:22.666671  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:22.705279  173069 cri.go:89] found id: ""
	I0203 11:49:22.705311  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.705320  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:22.705327  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:22.705380  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:22.742863  173069 cri.go:89] found id: ""
	I0203 11:49:22.742891  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.742900  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:22.742906  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:22.742962  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:22.780045  173069 cri.go:89] found id: ""
	I0203 11:49:22.780075  173069 logs.go:282] 0 containers: []
	W0203 11:49:22.780100  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:22.780114  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:22.780141  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:22.836699  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:22.836739  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:22.853278  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:22.853309  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:22.938252  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:22.938278  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:22.938294  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:23.021029  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:23.021071  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:25.557963  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:25.572407  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:25.572475  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:25.608045  173069 cri.go:89] found id: ""
	I0203 11:49:25.608079  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.608092  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:25.608112  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:25.608180  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:25.640547  173069 cri.go:89] found id: ""
	I0203 11:49:25.640585  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.640597  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:25.640606  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:25.640677  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:25.675981  173069 cri.go:89] found id: ""
	I0203 11:49:25.676011  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.676021  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:25.676028  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:25.676099  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:25.711898  173069 cri.go:89] found id: ""
	I0203 11:49:25.711934  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.711947  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:25.711954  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:25.712025  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:25.747075  173069 cri.go:89] found id: ""
	I0203 11:49:25.747105  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.747116  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:25.747125  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:25.747193  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:25.780357  173069 cri.go:89] found id: ""
	I0203 11:49:25.780396  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.780408  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:25.780416  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:25.780484  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:25.820871  173069 cri.go:89] found id: ""
	I0203 11:49:25.820902  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.820910  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:25.820916  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:25.820979  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:25.855524  173069 cri.go:89] found id: ""
	I0203 11:49:25.855559  173069 logs.go:282] 0 containers: []
	W0203 11:49:25.855571  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:25.855586  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:25.855606  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:25.868990  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:25.869028  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:25.937058  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:25.937102  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:25.937118  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:26.013039  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:26.013094  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:26.047998  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:26.048037  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:28.602149  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:28.616009  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:28.616087  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:28.650122  173069 cri.go:89] found id: ""
	I0203 11:49:28.650153  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.650163  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:28.650171  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:28.650231  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:28.684660  173069 cri.go:89] found id: ""
	I0203 11:49:28.684711  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.684724  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:28.684732  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:28.684794  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:28.717957  173069 cri.go:89] found id: ""
	I0203 11:49:28.717987  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.718013  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:28.718022  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:28.718078  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:28.755718  173069 cri.go:89] found id: ""
	I0203 11:49:28.755753  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.755764  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:28.755773  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:28.755846  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:28.791532  173069 cri.go:89] found id: ""
	I0203 11:49:28.791565  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.791575  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:28.791583  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:28.791649  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:28.832217  173069 cri.go:89] found id: ""
	I0203 11:49:28.832248  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.832259  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:28.832267  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:28.832332  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:28.866856  173069 cri.go:89] found id: ""
	I0203 11:49:28.866889  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.866900  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:28.866907  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:28.866974  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:28.901797  173069 cri.go:89] found id: ""
	I0203 11:49:28.901831  173069 logs.go:282] 0 containers: []
	W0203 11:49:28.901842  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:28.901855  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:28.901870  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:28.990979  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:28.991020  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:29.029714  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:29.029741  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:29.079623  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:29.079663  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:29.092838  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:29.092870  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:29.168272  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:31.668493  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:31.683288  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:31.683375  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:31.716718  173069 cri.go:89] found id: ""
	I0203 11:49:31.716751  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.716764  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:31.716774  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:31.716838  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:31.751583  173069 cri.go:89] found id: ""
	I0203 11:49:31.751616  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.751624  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:31.751634  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:31.751697  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:31.787362  173069 cri.go:89] found id: ""
	I0203 11:49:31.787399  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.787410  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:31.787418  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:31.787490  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:31.830613  173069 cri.go:89] found id: ""
	I0203 11:49:31.830646  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.830655  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:31.830661  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:31.830708  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:31.869504  173069 cri.go:89] found id: ""
	I0203 11:49:31.869606  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.869625  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:31.869634  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:31.869707  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:31.906305  173069 cri.go:89] found id: ""
	I0203 11:49:31.906335  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.906344  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:31.906364  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:31.906427  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:31.939685  173069 cri.go:89] found id: ""
	I0203 11:49:31.939723  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.939733  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:31.939747  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:31.939814  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:31.974062  173069 cri.go:89] found id: ""
	I0203 11:49:31.974098  173069 logs.go:282] 0 containers: []
	W0203 11:49:31.974110  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:31.974123  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:31.974140  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:32.030465  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:32.030504  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:32.044813  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:32.044848  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:32.117380  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:32.117409  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:32.117458  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:32.194077  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:32.194119  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:34.735527  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:34.748976  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:34.749055  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:34.783285  173069 cri.go:89] found id: ""
	I0203 11:49:34.783319  173069 logs.go:282] 0 containers: []
	W0203 11:49:34.783331  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:34.783340  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:34.783410  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:34.826675  173069 cri.go:89] found id: ""
	I0203 11:49:34.826701  173069 logs.go:282] 0 containers: []
	W0203 11:49:34.826711  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:34.826725  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:34.826783  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:34.863516  173069 cri.go:89] found id: ""
	I0203 11:49:34.863544  173069 logs.go:282] 0 containers: []
	W0203 11:49:34.863552  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:34.863559  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:34.863625  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:34.900927  173069 cri.go:89] found id: ""
	I0203 11:49:34.900960  173069 logs.go:282] 0 containers: []
	W0203 11:49:34.900972  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:34.900980  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
E0203 11:55:53.810147  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/no-preload-085638/client.crt: no such file or directory" logger="UnhandledError"
	I0203 11:49:34.901046  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:34.935771  173069 cri.go:89] found id: ""
	I0203 11:49:34.935805  173069 logs.go:282] 0 containers: []
	W0203 11:49:34.935818  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:34.935827  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:34.935902  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:34.970246  173069 cri.go:89] found id: ""
	I0203 11:49:34.970277  173069 logs.go:282] 0 containers: []
	W0203 11:49:34.970289  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:34.970296  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:34.970364  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:35.003768  173069 cri.go:89] found id: ""
	I0203 11:49:35.003797  173069 logs.go:282] 0 containers: []
	W0203 11:49:35.003804  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:35.003810  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:35.003886  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:35.036944  173069 cri.go:89] found id: ""
	I0203 11:49:35.036969  173069 logs.go:282] 0 containers: []
	W0203 11:49:35.036978  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:35.036989  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:35.037003  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:35.049553  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:35.049579  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:35.117735  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:35.117760  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:35.117778  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:35.193231  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:35.193275  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:35.234646  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:35.234674  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:37.786397  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:37.801296  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:37.801383  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:37.835993  173069 cri.go:89] found id: ""
	I0203 11:49:37.836026  173069 logs.go:282] 0 containers: []
	W0203 11:49:37.836037  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:37.836046  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:37.836115  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:37.872367  173069 cri.go:89] found id: ""
	I0203 11:49:37.872402  173069 logs.go:282] 0 containers: []
	W0203 11:49:37.872415  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:37.872423  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:37.872488  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:37.905993  173069 cri.go:89] found id: ""
	I0203 11:49:37.906042  173069 logs.go:282] 0 containers: []
	W0203 11:49:37.906053  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:37.906062  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:37.906132  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:37.942624  173069 cri.go:89] found id: ""
	I0203 11:49:37.942650  173069 logs.go:282] 0 containers: []
	W0203 11:49:37.942659  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:37.942665  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:37.942729  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:37.974869  173069 cri.go:89] found id: ""
	I0203 11:49:37.974896  173069 logs.go:282] 0 containers: []
	W0203 11:49:37.974906  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:37.974915  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:37.974981  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:38.011714  173069 cri.go:89] found id: ""
	I0203 11:49:38.011747  173069 logs.go:282] 0 containers: []
	W0203 11:49:38.011758  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:38.011767  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:38.011826  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:38.048377  173069 cri.go:89] found id: ""
	I0203 11:49:38.048410  173069 logs.go:282] 0 containers: []
	W0203 11:49:38.048419  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:38.048424  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:38.048477  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:38.081550  173069 cri.go:89] found id: ""
	I0203 11:49:38.081582  173069 logs.go:282] 0 containers: []
	W0203 11:49:38.081593  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:38.081606  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:38.081622  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:38.159461  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:38.159515  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:38.194365  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:38.194402  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:38.243829  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:38.243864  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:38.256563  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:38.256593  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:38.326858  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:40.827142  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:40.841956  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:40.842051  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:40.878335  173069 cri.go:89] found id: ""
	I0203 11:49:40.878378  173069 logs.go:282] 0 containers: []
	W0203 11:49:40.878389  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:40.878400  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:40.878470  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:40.912960  173069 cri.go:89] found id: ""
	I0203 11:49:40.912997  173069 logs.go:282] 0 containers: []
	W0203 11:49:40.913007  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:40.913016  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:40.913086  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:40.947374  173069 cri.go:89] found id: ""
	I0203 11:49:40.947409  173069 logs.go:282] 0 containers: []
	W0203 11:49:40.947417  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:40.947425  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:40.947479  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:40.983076  173069 cri.go:89] found id: ""
	I0203 11:49:40.983113  173069 logs.go:282] 0 containers: []
	W0203 11:49:40.983120  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:40.983127  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:40.983177  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:41.025693  173069 cri.go:89] found id: ""
	I0203 11:49:41.025728  173069 logs.go:282] 0 containers: []
	W0203 11:49:41.025738  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:41.025745  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:41.025801  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:41.074154  173069 cri.go:89] found id: ""
	I0203 11:49:41.074183  173069 logs.go:282] 0 containers: []
	W0203 11:49:41.074192  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:41.074199  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:41.074251  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:41.128957  173069 cri.go:89] found id: ""
	I0203 11:49:41.128989  173069 logs.go:282] 0 containers: []
	W0203 11:49:41.129001  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:41.129008  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:41.129061  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:41.175762  173069 cri.go:89] found id: ""
	I0203 11:49:41.175799  173069 logs.go:282] 0 containers: []
	W0203 11:49:41.175813  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:41.175828  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:41.175842  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:41.249927  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:41.249969  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:41.288919  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:41.288952  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:41.342977  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:41.343022  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:41.357265  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:41.357296  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:41.425081  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:43.926163  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:43.939365  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:43.939434  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:43.974603  173069 cri.go:89] found id: ""
	I0203 11:49:43.974632  173069 logs.go:282] 0 containers: []
	W0203 11:49:43.974639  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:43.974646  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:43.974716  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:44.007871  173069 cri.go:89] found id: ""
	I0203 11:49:44.007905  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.007917  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:44.007925  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:44.007987  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:44.041232  173069 cri.go:89] found id: ""
	I0203 11:49:44.041260  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.041269  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:44.041275  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:44.041327  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:44.074259  173069 cri.go:89] found id: ""
	I0203 11:49:44.074296  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.074307  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:44.074315  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:44.074381  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:44.105848  173069 cri.go:89] found id: ""
	I0203 11:49:44.105876  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.105884  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:44.105890  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:44.105956  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:44.138240  173069 cri.go:89] found id: ""
	I0203 11:49:44.138279  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.138288  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:44.138295  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:44.138346  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:44.177897  173069 cri.go:89] found id: ""
	I0203 11:49:44.177928  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.177936  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:44.177942  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:44.177992  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:44.213741  173069 cri.go:89] found id: ""
	I0203 11:49:44.213767  173069 logs.go:282] 0 containers: []
	W0203 11:49:44.213775  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:44.213785  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:44.213800  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:44.266983  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:44.267018  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:44.281949  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:44.281981  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:44.354678  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:44.354700  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:44.354712  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:44.431826  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:44.431875  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:46.972628  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:46.986893  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:46.986953  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:47.025312  173069 cri.go:89] found id: ""
	I0203 11:49:47.025345  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.025356  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:47.025363  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:47.025415  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:47.061759  173069 cri.go:89] found id: ""
	I0203 11:49:47.061790  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.061801  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:47.061809  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:47.061869  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:47.093633  173069 cri.go:89] found id: ""
	I0203 11:49:47.093660  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.093668  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:47.093674  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:47.093723  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:47.124926  173069 cri.go:89] found id: ""
	I0203 11:49:47.124955  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.124965  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:47.124972  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:47.125022  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:47.158978  173069 cri.go:89] found id: ""
	I0203 11:49:47.159008  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.159018  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:47.159026  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:47.159095  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:47.194818  173069 cri.go:89] found id: ""
	I0203 11:49:47.194854  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.194866  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:47.194876  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:47.194938  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:47.227275  173069 cri.go:89] found id: ""
	I0203 11:49:47.227300  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.227308  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:47.227313  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:47.227385  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:47.258979  173069 cri.go:89] found id: ""
	I0203 11:49:47.259006  173069 logs.go:282] 0 containers: []
	W0203 11:49:47.259014  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:47.259025  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:47.259037  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:47.337419  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:47.337459  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:47.375192  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:47.375223  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:47.425138  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:47.425181  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:47.437976  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:47.438020  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:47.506355  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:50.007373  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:50.020358  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:50.020432  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:50.053941  173069 cri.go:89] found id: ""
	I0203 11:49:50.053967  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.053975  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:50.053981  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:50.054041  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:50.092666  173069 cri.go:89] found id: ""
	I0203 11:49:50.092702  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.092711  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:50.092716  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:50.092780  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:50.129016  173069 cri.go:89] found id: ""
	I0203 11:49:50.129050  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.129058  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:50.129065  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:50.129123  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:50.170274  173069 cri.go:89] found id: ""
	I0203 11:49:50.170311  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.170323  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:50.170333  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:50.170407  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:50.211451  173069 cri.go:89] found id: ""
	I0203 11:49:50.211486  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.211495  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:50.211501  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:50.211557  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:50.244673  173069 cri.go:89] found id: ""
	I0203 11:49:50.244707  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.244718  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:50.244726  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:50.244793  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:50.279165  173069 cri.go:89] found id: ""
	I0203 11:49:50.279190  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.279198  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:50.279203  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:50.279252  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:50.314471  173069 cri.go:89] found id: ""
	I0203 11:49:50.314496  173069 logs.go:282] 0 containers: []
	W0203 11:49:50.314504  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:50.314514  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:50.314530  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:50.327435  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:50.327466  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:50.401632  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:50.401656  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:50.401675  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:50.473268  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:50.473307  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:50.512786  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:50.512816  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:53.065513  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:53.088349  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:53.088437  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:53.130309  173069 cri.go:89] found id: ""
	I0203 11:49:53.130350  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.130360  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:53.130366  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:53.130428  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:53.167056  173069 cri.go:89] found id: ""
	I0203 11:49:53.167087  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.167095  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:53.167101  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:53.167152  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:53.207272  173069 cri.go:89] found id: ""
	I0203 11:49:53.207307  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.207318  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:53.207327  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:53.207401  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:53.241440  173069 cri.go:89] found id: ""
	I0203 11:49:53.241473  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.241499  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:53.241508  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:53.241584  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:53.277298  173069 cri.go:89] found id: ""
	I0203 11:49:53.277331  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.277343  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:53.277361  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:53.277422  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:53.312388  173069 cri.go:89] found id: ""
	I0203 11:49:53.312415  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.312424  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:53.312432  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:53.312482  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:53.347118  173069 cri.go:89] found id: ""
	I0203 11:49:53.347152  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.347179  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:53.347188  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:53.347253  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:53.380009  173069 cri.go:89] found id: ""
	I0203 11:49:53.380042  173069 logs.go:282] 0 containers: []
	W0203 11:49:53.380051  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:53.380062  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:53.380074  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:53.462318  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:53.462360  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:53.502810  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:53.502856  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:53.552932  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:53.552972  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:53.565943  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:53.565983  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:53.643001  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:56.143207  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:56.158263  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:56.158347  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:56.194520  173069 cri.go:89] found id: ""
	I0203 11:49:56.194547  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.194560  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:56.194569  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:56.194647  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:56.230065  173069 cri.go:89] found id: ""
	I0203 11:49:56.230102  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.230129  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:56.230138  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:56.230208  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:56.269786  173069 cri.go:89] found id: ""
	I0203 11:49:56.269825  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.269836  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:56.269844  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:56.269909  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:56.304051  173069 cri.go:89] found id: ""
	I0203 11:49:56.304079  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.304090  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:56.304098  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:56.304174  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:56.337630  173069 cri.go:89] found id: ""
	I0203 11:49:56.337659  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.337667  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:56.337674  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:56.337730  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:56.376718  173069 cri.go:89] found id: ""
	I0203 11:49:56.376744  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.376752  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:56.376762  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:56.376812  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:56.414623  173069 cri.go:89] found id: ""
	I0203 11:49:56.414658  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.414669  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:56.414678  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:56.414748  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:56.448911  173069 cri.go:89] found id: ""
	I0203 11:49:56.448947  173069 logs.go:282] 0 containers: []
	W0203 11:49:56.448958  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:56.448973  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:56.448991  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:56.461854  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:56.461883  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:56.531597  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:56.531618  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:56.531635  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:56.605879  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:56.605921  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:56.644903  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:56.644941  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:49:59.198333  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:49:59.211850  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:49:59.211917  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:49:59.247994  173069 cri.go:89] found id: ""
	I0203 11:49:59.248042  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.248055  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:49:59.248065  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:49:59.248149  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:49:59.284864  173069 cri.go:89] found id: ""
	I0203 11:49:59.284896  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.284905  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:49:59.284911  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:49:59.284961  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:49:59.320065  173069 cri.go:89] found id: ""
	I0203 11:49:59.320093  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.320101  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:49:59.320108  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:49:59.320170  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:49:59.354650  173069 cri.go:89] found id: ""
	I0203 11:49:59.354675  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.354682  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:49:59.354688  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:49:59.354740  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:49:59.392382  173069 cri.go:89] found id: ""
	I0203 11:49:59.392421  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.392431  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:49:59.392438  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:49:59.392495  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:49:59.427064  173069 cri.go:89] found id: ""
	I0203 11:49:59.427092  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.427100  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:49:59.427106  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:49:59.427163  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:49:59.468233  173069 cri.go:89] found id: ""
	I0203 11:49:59.468268  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.468278  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:49:59.468290  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:49:59.468354  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:49:59.508084  173069 cri.go:89] found id: ""
	I0203 11:49:59.508134  173069 logs.go:282] 0 containers: []
	W0203 11:49:59.508149  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:49:59.508167  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:49:59.508193  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:49:59.524759  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:49:59.524798  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:49:59.604961  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:49:59.604993  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:49:59.605012  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:49:59.688029  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:49:59.688076  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:49:59.727474  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:49:59.727509  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:02.284336  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:02.297541  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:02.297632  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:02.330263  173069 cri.go:89] found id: ""
	I0203 11:50:02.330291  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.330305  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:02.330312  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:02.330370  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:02.366718  173069 cri.go:89] found id: ""
	I0203 11:50:02.366745  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.366753  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:02.366759  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:02.366816  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:02.399793  173069 cri.go:89] found id: ""
	I0203 11:50:02.399828  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.399858  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:02.399867  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:02.399937  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:02.438237  173069 cri.go:89] found id: ""
	I0203 11:50:02.438268  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.438279  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:02.438287  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:02.438348  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:02.477887  173069 cri.go:89] found id: ""
	I0203 11:50:02.477921  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.477930  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:02.477936  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:02.477987  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:02.516496  173069 cri.go:89] found id: ""
	I0203 11:50:02.516523  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.516534  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:02.516543  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:02.516608  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:02.555703  173069 cri.go:89] found id: ""
	I0203 11:50:02.555739  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.555756  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:02.555765  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:02.555834  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:02.590436  173069 cri.go:89] found id: ""
	I0203 11:50:02.590463  173069 logs.go:282] 0 containers: []
	W0203 11:50:02.590471  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:02.590481  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:02.590550  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:02.629718  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:02.629747  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:02.683471  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:02.683517  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:02.697969  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:02.698027  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:02.769499  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:02.769526  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:02.769540  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:05.350166  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:05.363371  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:05.363568  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:05.402446  173069 cri.go:89] found id: ""
	I0203 11:50:05.402496  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.402507  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:05.402515  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:05.402579  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:05.440694  173069 cri.go:89] found id: ""
	I0203 11:50:05.440727  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.440739  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:05.440749  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:05.440816  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:05.481373  173069 cri.go:89] found id: ""
	I0203 11:50:05.481406  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.481419  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:05.481428  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:05.481496  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:05.518731  173069 cri.go:89] found id: ""
	I0203 11:50:05.518767  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.518778  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:05.518787  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:05.518855  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:05.555716  173069 cri.go:89] found id: ""
	I0203 11:50:05.555751  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.555762  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:05.555769  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:05.555839  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:05.592761  173069 cri.go:89] found id: ""
	I0203 11:50:05.592789  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.592797  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:05.592803  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:05.592863  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:05.628252  173069 cri.go:89] found id: ""
	I0203 11:50:05.628280  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.628288  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:05.628294  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:05.628348  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:05.673133  173069 cri.go:89] found id: ""
	I0203 11:50:05.673170  173069 logs.go:282] 0 containers: []
	W0203 11:50:05.673182  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:05.673194  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:05.673210  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:05.727557  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:05.727623  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:05.742922  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:05.742963  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:05.816731  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:05.816756  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:05.816772  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:05.903874  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:05.903925  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:08.450205  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:08.463331  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:08.463431  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:08.503996  173069 cri.go:89] found id: ""
	I0203 11:50:08.504028  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.504037  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:08.504043  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:08.504099  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:08.543097  173069 cri.go:89] found id: ""
	I0203 11:50:08.543125  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.543133  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:08.543141  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:08.543197  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:08.578511  173069 cri.go:89] found id: ""
	I0203 11:50:08.578548  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.578560  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:08.578569  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:08.578641  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:08.614642  173069 cri.go:89] found id: ""
	I0203 11:50:08.614673  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.614681  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:08.614688  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:08.614743  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:08.651609  173069 cri.go:89] found id: ""
	I0203 11:50:08.651638  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.651645  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:08.651651  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:08.651699  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:08.685167  173069 cri.go:89] found id: ""
	I0203 11:50:08.685197  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.685204  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:08.685210  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:08.685273  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:08.719501  173069 cri.go:89] found id: ""
	I0203 11:50:08.719528  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.719536  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:08.719542  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:08.719590  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:08.755156  173069 cri.go:89] found id: ""
	I0203 11:50:08.755190  173069 logs.go:282] 0 containers: []
	W0203 11:50:08.755198  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:08.755209  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:08.755222  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:08.810368  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:08.810413  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:08.824147  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:08.824175  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:08.893404  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:08.893437  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:08.893452  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:08.971654  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:08.971695  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:11.513036  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:11.526349  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:11.526424  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:11.565127  173069 cri.go:89] found id: ""
	I0203 11:50:11.565155  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.565163  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:11.565170  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:11.565221  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:11.604580  173069 cri.go:89] found id: ""
	I0203 11:50:11.604606  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.604615  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:11.604637  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:11.604701  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:11.640064  173069 cri.go:89] found id: ""
	I0203 11:50:11.640098  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.640110  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:11.640119  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:11.640187  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:11.686350  173069 cri.go:89] found id: ""
	I0203 11:50:11.686446  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.686464  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:11.686476  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:11.686548  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:11.723497  173069 cri.go:89] found id: ""
	I0203 11:50:11.723526  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.723535  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:11.723541  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:11.723591  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:11.763734  173069 cri.go:89] found id: ""
	I0203 11:50:11.763771  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.763785  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:11.763794  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:11.763864  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:11.812264  173069 cri.go:89] found id: ""
	I0203 11:50:11.812305  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.812318  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:11.812326  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:11.812423  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:11.852717  173069 cri.go:89] found id: ""
	I0203 11:50:11.852744  173069 logs.go:282] 0 containers: []
	W0203 11:50:11.852753  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:11.852764  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:11.852781  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:11.914336  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:11.914376  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:11.930120  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:11.930170  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:12.008964  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:12.008995  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:12.009012  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:12.118311  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:12.118360  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:14.680967  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:14.694061  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:14.694130  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:14.729802  173069 cri.go:89] found id: ""
	I0203 11:50:14.729828  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.729836  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:14.729842  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:14.729910  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:14.765262  173069 cri.go:89] found id: ""
	I0203 11:50:14.765292  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.765300  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:14.765306  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:14.765371  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:14.801221  173069 cri.go:89] found id: ""
	I0203 11:50:14.801260  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.801272  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:14.801279  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:14.801336  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:14.835910  173069 cri.go:89] found id: ""
	I0203 11:50:14.835941  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.835950  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:14.835956  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:14.836020  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:14.873174  173069 cri.go:89] found id: ""
	I0203 11:50:14.873203  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.873211  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:14.873217  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:14.873273  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:14.920578  173069 cri.go:89] found id: ""
	I0203 11:50:14.920613  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.920627  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:14.920635  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:14.920708  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:14.969692  173069 cri.go:89] found id: ""
	I0203 11:50:14.969732  173069 logs.go:282] 0 containers: []
	W0203 11:50:14.969743  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:14.969754  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:14.969824  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:15.011569  173069 cri.go:89] found id: ""
	I0203 11:50:15.011601  173069 logs.go:282] 0 containers: []
	W0203 11:50:15.011612  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:15.011624  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:15.011636  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:15.067045  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:15.067111  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:15.084215  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:15.084254  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:15.159744  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:15.159782  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:15.159798  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:15.245199  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:15.245245  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:17.790388  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:17.803330  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:17.803392  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:17.842135  173069 cri.go:89] found id: ""
	I0203 11:50:17.842164  173069 logs.go:282] 0 containers: []
	W0203 11:50:17.842172  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:17.842184  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:17.842236  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:17.875591  173069 cri.go:89] found id: ""
	I0203 11:50:17.875625  173069 logs.go:282] 0 containers: []
	W0203 11:50:17.875638  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:17.875646  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:17.875711  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:17.908937  173069 cri.go:89] found id: ""
	I0203 11:50:17.908971  173069 logs.go:282] 0 containers: []
	W0203 11:50:17.908983  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:17.908991  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:17.909057  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:17.943218  173069 cri.go:89] found id: ""
	I0203 11:50:17.943246  173069 logs.go:282] 0 containers: []
	W0203 11:50:17.943257  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:17.943265  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:17.943337  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:17.977222  173069 cri.go:89] found id: ""
	I0203 11:50:17.977253  173069 logs.go:282] 0 containers: []
	W0203 11:50:17.977264  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:17.977272  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:17.977337  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:18.012474  173069 cri.go:89] found id: ""
	I0203 11:50:18.012501  173069 logs.go:282] 0 containers: []
	W0203 11:50:18.012513  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:18.012522  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:18.012579  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:18.050368  173069 cri.go:89] found id: ""
	I0203 11:50:18.050402  173069 logs.go:282] 0 containers: []
	W0203 11:50:18.050413  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:18.050421  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:18.050482  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:18.085234  173069 cri.go:89] found id: ""
	I0203 11:50:18.085263  173069 logs.go:282] 0 containers: []
	W0203 11:50:18.085271  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:18.085283  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:18.085295  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:18.098271  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:18.098301  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:18.169362  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:18.169384  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:18.169396  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:18.248942  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:18.248986  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:18.290124  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:18.290154  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:20.840172  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:20.853356  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:20.853426  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:20.886275  173069 cri.go:89] found id: ""
	I0203 11:50:20.886304  173069 logs.go:282] 0 containers: []
	W0203 11:50:20.886316  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:20.886323  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:20.886389  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:20.919342  173069 cri.go:89] found id: ""
	I0203 11:50:20.919372  173069 logs.go:282] 0 containers: []
	W0203 11:50:20.919380  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:20.919386  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:20.919437  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:20.952581  173069 cri.go:89] found id: ""
	I0203 11:50:20.952613  173069 logs.go:282] 0 containers: []
	W0203 11:50:20.952625  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:20.952636  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:20.952701  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:20.987235  173069 cri.go:89] found id: ""
	I0203 11:50:20.987266  173069 logs.go:282] 0 containers: []
	W0203 11:50:20.987277  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:20.987283  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:20.987335  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:21.021946  173069 cri.go:89] found id: ""
	I0203 11:50:21.021978  173069 logs.go:282] 0 containers: []
	W0203 11:50:21.021991  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:21.022015  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:21.022074  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:21.058778  173069 cri.go:89] found id: ""
	I0203 11:50:21.058803  173069 logs.go:282] 0 containers: []
	W0203 11:50:21.058812  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:21.058818  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:21.058885  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:21.093629  173069 cri.go:89] found id: ""
	I0203 11:50:21.093665  173069 logs.go:282] 0 containers: []
	W0203 11:50:21.093677  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:21.093685  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:21.093739  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:21.126566  173069 cri.go:89] found id: ""
	I0203 11:50:21.126591  173069 logs.go:282] 0 containers: []
	W0203 11:50:21.126599  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:21.126610  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:21.126626  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:21.139102  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:21.139131  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:21.209379  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:21.209403  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:21.209417  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:21.284145  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:21.284186  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:21.342268  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:21.342304  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:23.931096  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:23.945615  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:23.945691  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:23.985596  173069 cri.go:89] found id: ""
	I0203 11:50:23.985634  173069 logs.go:282] 0 containers: []
	W0203 11:50:23.985646  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:23.985654  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:23.985716  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:24.025317  173069 cri.go:89] found id: ""
	I0203 11:50:24.025349  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.025361  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:24.025369  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:24.025457  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:24.069918  173069 cri.go:89] found id: ""
	I0203 11:50:24.069953  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.069965  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:24.069974  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:24.070060  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:24.106564  173069 cri.go:89] found id: ""
	I0203 11:50:24.106595  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.106606  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:24.106613  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:24.106677  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:24.148888  173069 cri.go:89] found id: ""
	I0203 11:50:24.148921  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.148932  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:24.148941  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:24.149005  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:24.194604  173069 cri.go:89] found id: ""
	I0203 11:50:24.194635  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.194646  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:24.194654  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:24.194715  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:24.234907  173069 cri.go:89] found id: ""
	I0203 11:50:24.234944  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.234956  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:24.234964  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:24.235031  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:24.275273  173069 cri.go:89] found id: ""
	I0203 11:50:24.275366  173069 logs.go:282] 0 containers: []
	W0203 11:50:24.275382  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:24.275395  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:24.275411  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:24.368669  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:24.368695  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:24.368713  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:24.467439  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:24.467483  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:24.513279  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:24.513319  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:24.579974  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:24.580015  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:27.097349  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:27.113924  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:27.114040  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:27.153720  173069 cri.go:89] found id: ""
	I0203 11:50:27.153751  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.153761  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:27.153770  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:27.153839  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:27.225499  173069 cri.go:89] found id: ""
	I0203 11:50:27.225532  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.225545  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:27.225553  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:27.225618  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:27.262240  173069 cri.go:89] found id: ""
	I0203 11:50:27.262278  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.262289  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:27.262298  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:27.262367  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:27.300412  173069 cri.go:89] found id: ""
	I0203 11:50:27.300445  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.300457  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:27.300464  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:27.300518  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:27.337202  173069 cri.go:89] found id: ""
	I0203 11:50:27.337235  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.337247  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:27.337255  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:27.337316  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:27.373538  173069 cri.go:89] found id: ""
	I0203 11:50:27.373571  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.373580  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:27.373587  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:27.373637  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:27.414814  173069 cri.go:89] found id: ""
	I0203 11:50:27.414850  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.414863  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:27.414872  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:27.414941  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:27.448011  173069 cri.go:89] found id: ""
	I0203 11:50:27.448045  173069 logs.go:282] 0 containers: []
	W0203 11:50:27.448055  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:27.448070  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:27.448087  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:27.517316  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:27.517349  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:27.517367  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:27.596479  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:27.596520  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:27.641703  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:27.641743  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:27.700504  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:27.700543  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:30.218875  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:30.232285  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:30.232356  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:30.269032  173069 cri.go:89] found id: ""
	I0203 11:50:30.269067  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.269081  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:30.269088  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:30.269148  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:30.305354  173069 cri.go:89] found id: ""
	I0203 11:50:30.305390  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.305405  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:30.305413  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:30.305480  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:30.351850  173069 cri.go:89] found id: ""
	I0203 11:50:30.351884  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.351896  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:30.351904  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:30.351966  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:30.396098  173069 cri.go:89] found id: ""
	I0203 11:50:30.396138  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.396150  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:30.396162  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:30.396233  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:30.430296  173069 cri.go:89] found id: ""
	I0203 11:50:30.430323  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.430331  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:30.430338  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:30.430397  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:30.466526  173069 cri.go:89] found id: ""
	I0203 11:50:30.466562  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.466574  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:30.466583  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:30.466640  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:30.502533  173069 cri.go:89] found id: ""
	I0203 11:50:30.502558  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.502566  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:30.502573  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:30.502627  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:30.536190  173069 cri.go:89] found id: ""
	I0203 11:50:30.536220  173069 logs.go:282] 0 containers: []
	W0203 11:50:30.536231  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:30.536250  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:30.536267  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:30.616070  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:30.616111  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:30.655951  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:30.655982  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:30.705274  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:30.705314  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:30.718702  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:30.718736  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:30.790794  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:33.291434  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:33.304540  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:33.304626  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:33.345859  173069 cri.go:89] found id: ""
	I0203 11:50:33.345893  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.345905  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:33.345913  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:33.345981  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:33.389744  173069 cri.go:89] found id: ""
	I0203 11:50:33.389779  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.389791  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:33.389799  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:33.389866  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:33.430483  173069 cri.go:89] found id: ""
	I0203 11:50:33.430512  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.430520  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:33.430526  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:33.430590  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:33.465492  173069 cri.go:89] found id: ""
	I0203 11:50:33.465526  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.465536  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:33.465544  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:33.465619  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:33.500541  173069 cri.go:89] found id: ""
	I0203 11:50:33.500571  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.500579  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:33.500586  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:33.500663  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:33.545703  173069 cri.go:89] found id: ""
	I0203 11:50:33.545737  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.545748  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:33.545756  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:33.545820  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:33.584267  173069 cri.go:89] found id: ""
	I0203 11:50:33.584299  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.584310  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:33.584319  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:33.584394  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:33.621799  173069 cri.go:89] found id: ""
	I0203 11:50:33.621830  173069 logs.go:282] 0 containers: []
	W0203 11:50:33.621841  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:33.621855  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:33.621870  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:33.675258  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:33.675295  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:33.692337  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:33.692367  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:33.765027  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:33.765058  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:33.765074  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:33.844763  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:33.844801  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:36.394112  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:36.407425  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:36.407492  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:36.442875  173069 cri.go:89] found id: ""
	I0203 11:50:36.442908  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.442919  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:36.442932  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:36.442992  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:36.474430  173069 cri.go:89] found id: ""
	I0203 11:50:36.474461  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.474472  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:36.474480  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:36.474545  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:36.505264  173069 cri.go:89] found id: ""
	I0203 11:50:36.505290  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.505298  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:36.505305  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:36.505365  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:36.540951  173069 cri.go:89] found id: ""
	I0203 11:50:36.540978  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.540987  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:36.540993  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:36.541041  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:36.573450  173069 cri.go:89] found id: ""
	I0203 11:50:36.573481  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.573498  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:36.573506  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:36.573569  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:36.604310  173069 cri.go:89] found id: ""
	I0203 11:50:36.604341  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.604351  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:36.604359  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:36.604425  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:36.637426  173069 cri.go:89] found id: ""
	I0203 11:50:36.637460  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.637473  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:36.637481  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:36.637542  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:36.670136  173069 cri.go:89] found id: ""
	I0203 11:50:36.670166  173069 logs.go:282] 0 containers: []
	W0203 11:50:36.670178  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:36.670191  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:36.670208  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:36.738050  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:36.738077  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:36.738091  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:36.831428  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:36.831478  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:36.878699  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:36.878735  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:36.950949  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:36.950990  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:39.468338  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:39.486677  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:39.486754  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:39.521548  173069 cri.go:89] found id: ""
	I0203 11:50:39.521582  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.521594  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:39.521602  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:39.521669  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:39.565058  173069 cri.go:89] found id: ""
	I0203 11:50:39.565093  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.565106  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:39.565114  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:39.565179  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:39.600819  173069 cri.go:89] found id: ""
	I0203 11:50:39.600850  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.600862  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:39.600870  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:39.600935  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:39.636885  173069 cri.go:89] found id: ""
	I0203 11:50:39.636926  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.636939  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:39.636948  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:39.637012  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:39.677237  173069 cri.go:89] found id: ""
	I0203 11:50:39.677268  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.677280  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:39.677288  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:39.677347  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:39.716496  173069 cri.go:89] found id: ""
	I0203 11:50:39.716531  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.716543  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:39.716552  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:39.716618  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:39.750035  173069 cri.go:89] found id: ""
	I0203 11:50:39.750068  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.750078  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:39.750084  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:39.750155  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:39.796330  173069 cri.go:89] found id: ""
	I0203 11:50:39.796364  173069 logs.go:282] 0 containers: []
	W0203 11:50:39.796373  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:39.796384  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:39.796395  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:39.845630  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:39.845671  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:39.858903  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:39.858934  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:39.932737  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:39.932760  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:39.932776  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:40.021003  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:40.021041  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:42.570251  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:42.584423  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:42.584506  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:42.621551  173069 cri.go:89] found id: ""
	I0203 11:50:42.621585  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.621602  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:42.621611  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:42.621677  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:42.655708  173069 cri.go:89] found id: ""
	I0203 11:50:42.655741  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.655754  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:42.655762  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:42.655836  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:42.694819  173069 cri.go:89] found id: ""
	I0203 11:50:42.694854  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.694865  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:42.694876  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:42.694938  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:42.727360  173069 cri.go:89] found id: ""
	I0203 11:50:42.727389  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.727399  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:42.727407  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:42.727472  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:42.768148  173069 cri.go:89] found id: ""
	I0203 11:50:42.768184  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.768194  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:42.768202  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:42.768261  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:42.810609  173069 cri.go:89] found id: ""
	I0203 11:50:42.810638  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.810648  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:42.810657  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:42.810719  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:42.851709  173069 cri.go:89] found id: ""
	I0203 11:50:42.851738  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.851750  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:42.851758  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:42.851821  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:42.889805  173069 cri.go:89] found id: ""
	I0203 11:50:42.889837  173069 logs.go:282] 0 containers: []
	W0203 11:50:42.889848  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:42.889872  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:42.889897  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:42.946318  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:42.946353  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:42.963133  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:42.963179  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:43.053089  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:43.053120  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:43.053143  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:43.139164  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:43.139216  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:45.676602  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:45.690902  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:45.690970  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:45.729082  173069 cri.go:89] found id: ""
	I0203 11:50:45.729108  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.729115  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:45.729121  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:45.729171  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:45.767556  173069 cri.go:89] found id: ""
	I0203 11:50:45.767589  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.767598  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:45.767604  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:45.767655  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:45.805137  173069 cri.go:89] found id: ""
	I0203 11:50:45.805166  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.805177  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:45.805184  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:45.805247  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:45.842203  173069 cri.go:89] found id: ""
	I0203 11:50:45.842229  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.842237  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:45.842242  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:45.842293  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:45.878991  173069 cri.go:89] found id: ""
	I0203 11:50:45.879019  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.879027  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:45.879033  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:45.879087  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:45.913350  173069 cri.go:89] found id: ""
	I0203 11:50:45.913382  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.913390  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:45.913396  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:45.913456  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:45.946889  173069 cri.go:89] found id: ""
	I0203 11:50:45.946915  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.946922  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:45.946928  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:45.946976  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:45.981388  173069 cri.go:89] found id: ""
	I0203 11:50:45.981414  173069 logs.go:282] 0 containers: []
	W0203 11:50:45.981424  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:45.981435  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:45.981447  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:46.030773  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:46.030811  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:46.044027  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:46.044058  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:46.120247  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:46.120274  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:46.120289  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:46.191925  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:46.191965  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:48.731857  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:48.748569  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:48.748659  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:48.799409  173069 cri.go:89] found id: ""
	I0203 11:50:48.799438  173069 logs.go:282] 0 containers: []
	W0203 11:50:48.799450  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:48.799462  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:48.799521  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:48.837870  173069 cri.go:89] found id: ""
	I0203 11:50:48.837902  173069 logs.go:282] 0 containers: []
	W0203 11:50:48.837916  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:48.837923  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:48.837978  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:48.876148  173069 cri.go:89] found id: ""
	I0203 11:50:48.876180  173069 logs.go:282] 0 containers: []
	W0203 11:50:48.876191  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:48.876200  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:48.876250  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:48.913278  173069 cri.go:89] found id: ""
	I0203 11:50:48.913306  173069 logs.go:282] 0 containers: []
	W0203 11:50:48.913317  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:48.913326  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:48.913375  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:48.948653  173069 cri.go:89] found id: ""
	I0203 11:50:48.948680  173069 logs.go:282] 0 containers: []
	W0203 11:50:48.948690  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:48.948699  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:48.948768  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:48.982720  173069 cri.go:89] found id: ""
	I0203 11:50:48.982748  173069 logs.go:282] 0 containers: []
	W0203 11:50:48.982758  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:48.982770  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:48.982820  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:49.025876  173069 cri.go:89] found id: ""
	I0203 11:50:49.025904  173069 logs.go:282] 0 containers: []
	W0203 11:50:49.025915  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:49.025923  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:49.025973  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:49.065428  173069 cri.go:89] found id: ""
	I0203 11:50:49.065453  173069 logs.go:282] 0 containers: []
	W0203 11:50:49.065464  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:49.065492  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:49.065508  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:49.120331  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:49.120365  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:49.134333  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:49.134361  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:49.203860  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:49.203885  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:49.203901  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:49.284732  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:49.284770  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:51.825425  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:51.840786  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:51.840844  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:51.876669  173069 cri.go:89] found id: ""
	I0203 11:50:51.876702  173069 logs.go:282] 0 containers: []
	W0203 11:50:51.876713  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:51.876722  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:51.876794  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:51.909687  173069 cri.go:89] found id: ""
	I0203 11:50:51.909724  173069 logs.go:282] 0 containers: []
	W0203 11:50:51.909737  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:51.909748  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:51.909799  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:51.948000  173069 cri.go:89] found id: ""
	I0203 11:50:51.948032  173069 logs.go:282] 0 containers: []
	W0203 11:50:51.948044  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:51.948052  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:51.948110  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:51.991148  173069 cri.go:89] found id: ""
	I0203 11:50:51.991174  173069 logs.go:282] 0 containers: []
	W0203 11:50:51.991183  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:51.991188  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:51.991235  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:52.026139  173069 cri.go:89] found id: ""
	I0203 11:50:52.026172  173069 logs.go:282] 0 containers: []
	W0203 11:50:52.026184  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:52.026193  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:52.026255  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:52.059704  173069 cri.go:89] found id: ""
	I0203 11:50:52.059737  173069 logs.go:282] 0 containers: []
	W0203 11:50:52.059749  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:52.059758  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:52.059821  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:52.093550  173069 cri.go:89] found id: ""
	I0203 11:50:52.093584  173069 logs.go:282] 0 containers: []
	W0203 11:50:52.093605  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:52.093613  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:52.093682  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:52.131255  173069 cri.go:89] found id: ""
	I0203 11:50:52.131287  173069 logs.go:282] 0 containers: []
	W0203 11:50:52.131299  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:52.131312  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:52.131326  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:52.185059  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:52.185109  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:52.197991  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:52.198045  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:52.267574  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:52.267597  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:52.267610  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:52.342792  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:52.342833  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:54.883229  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:54.900714  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:54.900796  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:54.949360  173069 cri.go:89] found id: ""
	I0203 11:50:54.949392  173069 logs.go:282] 0 containers: []
	W0203 11:50:54.949404  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:54.949416  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:54.949484  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:54.991266  173069 cri.go:89] found id: ""
	I0203 11:50:54.991319  173069 logs.go:282] 0 containers: []
	W0203 11:50:54.991330  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:54.991338  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:54.991434  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:55.035355  173069 cri.go:89] found id: ""
	I0203 11:50:55.035383  173069 logs.go:282] 0 containers: []
	W0203 11:50:55.035394  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:55.035402  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:55.035474  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:55.077307  173069 cri.go:89] found id: ""
	I0203 11:50:55.077333  173069 logs.go:282] 0 containers: []
	W0203 11:50:55.077344  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:55.077352  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:55.077403  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:55.114354  173069 cri.go:89] found id: ""
	I0203 11:50:55.114379  173069 logs.go:282] 0 containers: []
	W0203 11:50:55.114387  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:55.114393  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:55.114442  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:55.152718  173069 cri.go:89] found id: ""
	I0203 11:50:55.152750  173069 logs.go:282] 0 containers: []
	W0203 11:50:55.152762  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:55.152770  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:55.152832  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:55.196117  173069 cri.go:89] found id: ""
	I0203 11:50:55.196146  173069 logs.go:282] 0 containers: []
	W0203 11:50:55.196157  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:55.196165  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:55.196217  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:55.236833  173069 cri.go:89] found id: ""
	I0203 11:50:55.236863  173069 logs.go:282] 0 containers: []
	W0203 11:50:55.236874  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:55.236887  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:55.236902  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:55.318055  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:55.318156  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:55.335665  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:55.335700  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:55.455538  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:55.455561  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:55.455575  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:55.540430  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:55.540470  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:50:58.091024  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:50:58.109688  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:50:58.109771  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:50:58.153354  173069 cri.go:89] found id: ""
	I0203 11:50:58.153385  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.153395  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:50:58.153403  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:50:58.153464  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:50:58.203221  173069 cri.go:89] found id: ""
	I0203 11:50:58.203256  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.203268  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:50:58.203276  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:50:58.203342  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:50:58.247147  173069 cri.go:89] found id: ""
	I0203 11:50:58.247184  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.247196  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:50:58.247204  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:50:58.247267  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:50:58.294930  173069 cri.go:89] found id: ""
	I0203 11:50:58.294965  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.294978  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:50:58.294987  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:50:58.295045  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:50:58.339295  173069 cri.go:89] found id: ""
	I0203 11:50:58.339335  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.339346  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:50:58.339354  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:50:58.339427  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:50:58.385247  173069 cri.go:89] found id: ""
	I0203 11:50:58.385280  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.385291  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:50:58.385300  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:50:58.385366  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:50:58.433084  173069 cri.go:89] found id: ""
	I0203 11:50:58.433116  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.433127  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:50:58.433136  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:50:58.433197  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:50:58.478197  173069 cri.go:89] found id: ""
	I0203 11:50:58.478230  173069 logs.go:282] 0 containers: []
	W0203 11:50:58.478240  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:50:58.478252  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:50:58.478268  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:50:58.554408  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:50:58.554451  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:50:58.572798  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:50:58.572833  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:50:58.683547  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:50:58.683574  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:50:58.683588  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:50:58.799566  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:50:58.799607  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:01.350100  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:01.367642  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:01.367728  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:01.409813  173069 cri.go:89] found id: ""
	I0203 11:51:01.409847  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.409860  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:01.409869  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:01.409940  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:01.467961  173069 cri.go:89] found id: ""
	I0203 11:51:01.467995  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.468006  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:01.468015  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:01.468079  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:01.504705  173069 cri.go:89] found id: ""
	I0203 11:51:01.504748  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.504763  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:01.504771  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:01.504845  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:01.543580  173069 cri.go:89] found id: ""
	I0203 11:51:01.543615  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.543629  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:01.543637  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:01.543700  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:01.584219  173069 cri.go:89] found id: ""
	I0203 11:51:01.584258  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.584269  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:01.584279  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:01.584344  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:01.628783  173069 cri.go:89] found id: ""
	I0203 11:51:01.628880  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.628905  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:01.628919  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:01.628997  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:01.665692  173069 cri.go:89] found id: ""
	I0203 11:51:01.665722  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.665733  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:01.665741  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:01.665808  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:01.705826  173069 cri.go:89] found id: ""
	I0203 11:51:01.705861  173069 logs.go:282] 0 containers: []
	W0203 11:51:01.705874  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:01.705888  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:01.705904  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:01.749018  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:01.749059  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:01.815891  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:01.815941  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:01.830571  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:01.830606  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:01.929875  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:01.929956  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:01.929978  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:04.515755  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:04.529095  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:04.529160  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:04.570211  173069 cri.go:89] found id: ""
	I0203 11:51:04.570234  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.570243  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:04.570249  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:04.570310  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:04.613033  173069 cri.go:89] found id: ""
	I0203 11:51:04.613067  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.613080  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:04.613089  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:04.613159  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:04.649214  173069 cri.go:89] found id: ""
	I0203 11:51:04.649245  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.649253  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:04.649258  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:04.649313  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:04.683284  173069 cri.go:89] found id: ""
	I0203 11:51:04.683310  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.683318  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:04.683324  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:04.683378  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:04.715793  173069 cri.go:89] found id: ""
	I0203 11:51:04.715820  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.715828  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:04.715834  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:04.715890  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:04.748978  173069 cri.go:89] found id: ""
	I0203 11:51:04.749010  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.749021  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:04.749029  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:04.749095  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:04.782689  173069 cri.go:89] found id: ""
	I0203 11:51:04.782718  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.782726  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:04.782732  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:04.782786  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:04.814431  173069 cri.go:89] found id: ""
	I0203 11:51:04.814460  173069 logs.go:282] 0 containers: []
	W0203 11:51:04.814469  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:04.814479  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:04.814491  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:04.887743  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:04.887788  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:04.929791  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:04.929819  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:04.991236  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:04.991288  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:05.004384  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:05.004428  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:05.070951  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:07.571255  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:07.586053  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:07.586155  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:07.620297  173069 cri.go:89] found id: ""
	I0203 11:51:07.620332  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.620344  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:07.620358  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:07.620439  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:07.653785  173069 cri.go:89] found id: ""
	I0203 11:51:07.653815  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.653825  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:07.653833  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:07.653898  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:07.687910  173069 cri.go:89] found id: ""
	I0203 11:51:07.687935  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.687943  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:07.687949  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:07.687996  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:07.721396  173069 cri.go:89] found id: ""
	I0203 11:51:07.721433  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.721444  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:07.721453  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:07.721521  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:07.753966  173069 cri.go:89] found id: ""
	I0203 11:51:07.753992  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.754019  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:07.754027  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:07.754086  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:07.790067  173069 cri.go:89] found id: ""
	I0203 11:51:07.790094  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.790103  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:07.790112  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:07.790170  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:07.821617  173069 cri.go:89] found id: ""
	I0203 11:51:07.821642  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.821650  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:07.821656  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:07.821713  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:07.860926  173069 cri.go:89] found id: ""
	I0203 11:51:07.860950  173069 logs.go:282] 0 containers: []
	W0203 11:51:07.860959  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:07.860972  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:07.860986  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:07.901593  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:07.901623  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:07.950040  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:07.950083  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:07.962876  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:07.962903  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:08.029586  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:08.029610  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:08.029628  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:10.640922  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:10.658561  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:10.658645  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:10.704604  173069 cri.go:89] found id: ""
	I0203 11:51:10.704639  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.704651  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:10.704660  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:10.704728  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:10.738081  173069 cri.go:89] found id: ""
	I0203 11:51:10.738113  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.738124  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:10.738133  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:10.738195  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:10.773067  173069 cri.go:89] found id: ""
	I0203 11:51:10.773100  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.773111  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:10.773119  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:10.773192  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:10.813822  173069 cri.go:89] found id: ""
	I0203 11:51:10.813853  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.813865  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:10.813874  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:10.813940  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:10.851534  173069 cri.go:89] found id: ""
	I0203 11:51:10.851565  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.851577  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:10.851586  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:10.851649  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:10.887296  173069 cri.go:89] found id: ""
	I0203 11:51:10.887329  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.887341  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:10.887367  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:10.887429  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:10.930350  173069 cri.go:89] found id: ""
	I0203 11:51:10.930381  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.930392  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:10.930400  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:10.930471  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:10.967042  173069 cri.go:89] found id: ""
	I0203 11:51:10.967079  173069 logs.go:282] 0 containers: []
	W0203 11:51:10.967090  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:10.967104  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:10.967120  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:11.048620  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:11.048659  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:11.083966  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:11.083999  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:11.135488  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:11.135529  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:11.148773  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:11.148805  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:11.213853  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:13.714134  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:13.729000  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:13.729076  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:13.773419  173069 cri.go:89] found id: ""
	I0203 11:51:13.773454  173069 logs.go:282] 0 containers: []
	W0203 11:51:13.773465  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:13.773471  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:13.773520  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:13.813696  173069 cri.go:89] found id: ""
	I0203 11:51:13.813732  173069 logs.go:282] 0 containers: []
	W0203 11:51:13.813742  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:13.813750  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:13.813815  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:13.881662  173069 cri.go:89] found id: ""
	I0203 11:51:13.881691  173069 logs.go:282] 0 containers: []
	W0203 11:51:13.881699  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:13.881706  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:13.881761  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:13.955215  173069 cri.go:89] found id: ""
	I0203 11:51:13.955244  173069 logs.go:282] 0 containers: []
	W0203 11:51:13.955254  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:13.955263  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:13.955331  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:13.994473  173069 cri.go:89] found id: ""
	I0203 11:51:13.994507  173069 logs.go:282] 0 containers: []
	W0203 11:51:13.994518  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:13.994556  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:13.994646  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:14.035743  173069 cri.go:89] found id: ""
	I0203 11:51:14.035774  173069 logs.go:282] 0 containers: []
	W0203 11:51:14.035787  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:14.035796  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:14.035858  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:14.077569  173069 cri.go:89] found id: ""
	I0203 11:51:14.077608  173069 logs.go:282] 0 containers: []
	W0203 11:51:14.077619  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:14.077627  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:14.077686  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:14.116872  173069 cri.go:89] found id: ""
	I0203 11:51:14.116905  173069 logs.go:282] 0 containers: []
	W0203 11:51:14.116918  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:14.116930  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:14.116947  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:14.187026  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:14.187066  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:14.205149  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:14.205183  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:14.278993  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:14.279015  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:14.279029  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:14.365417  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:14.365452  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:16.901784  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:16.918580  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:16.918651  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:16.957801  173069 cri.go:89] found id: ""
	I0203 11:51:16.957903  173069 logs.go:282] 0 containers: []
	W0203 11:51:16.957931  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:16.957953  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:16.958053  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:17.004715  173069 cri.go:89] found id: ""
	I0203 11:51:17.004746  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.004757  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:17.004765  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:17.004838  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:17.044514  173069 cri.go:89] found id: ""
	I0203 11:51:17.044545  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.044556  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:17.044562  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:17.044621  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:17.084326  173069 cri.go:89] found id: ""
	I0203 11:51:17.084370  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.084384  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:17.084393  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:17.084459  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:17.130344  173069 cri.go:89] found id: ""
	I0203 11:51:17.130389  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.130401  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:17.130410  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:17.130480  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:17.180635  173069 cri.go:89] found id: ""
	I0203 11:51:17.180669  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.180682  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:17.180691  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:17.180756  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:17.230635  173069 cri.go:89] found id: ""
	I0203 11:51:17.230665  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.230676  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:17.230684  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:17.230742  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:17.279868  173069 cri.go:89] found id: ""
	I0203 11:51:17.279903  173069 logs.go:282] 0 containers: []
	W0203 11:51:17.279914  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:17.279927  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:17.279946  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:17.360185  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:17.360232  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:17.377071  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:17.377111  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:17.462551  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:17.462580  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:17.462598  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:17.572553  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:17.572608  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:20.132346  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:20.149055  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:20.149143  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:20.200577  173069 cri.go:89] found id: ""
	I0203 11:51:20.200615  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.200626  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:20.200635  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:20.200714  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:20.243059  173069 cri.go:89] found id: ""
	I0203 11:51:20.243086  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.243097  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:20.243106  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:20.243170  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:20.276890  173069 cri.go:89] found id: ""
	I0203 11:51:20.276931  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.276945  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:20.276957  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:20.277027  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:20.318821  173069 cri.go:89] found id: ""
	I0203 11:51:20.318848  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.318859  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:20.318866  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:20.318932  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:20.365362  173069 cri.go:89] found id: ""
	I0203 11:51:20.365391  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.365401  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:20.365409  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:20.365471  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:20.400468  173069 cri.go:89] found id: ""
	I0203 11:51:20.400497  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.400507  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:20.400515  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:20.400581  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:20.438855  173069 cri.go:89] found id: ""
	I0203 11:51:20.438891  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.438902  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:20.438911  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:20.438976  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:20.489638  173069 cri.go:89] found id: ""
	I0203 11:51:20.489669  173069 logs.go:282] 0 containers: []
	W0203 11:51:20.489680  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:20.489694  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:20.489710  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:20.543314  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:20.543354  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:20.558213  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:20.558243  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:20.655501  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:20.655529  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:20.655543  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:20.753393  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:20.753434  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:23.294161  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:23.308612  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:23.308694  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:23.344005  173069 cri.go:89] found id: ""
	I0203 11:51:23.344030  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.344040  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:23.344048  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:23.344108  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:23.378892  173069 cri.go:89] found id: ""
	I0203 11:51:23.378920  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.378930  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:23.378937  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:23.378994  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:23.412412  173069 cri.go:89] found id: ""
	I0203 11:51:23.412442  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.412452  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:23.412461  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:23.412523  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:23.447447  173069 cri.go:89] found id: ""
	I0203 11:51:23.447480  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.447492  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:23.447499  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:23.447551  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:23.481936  173069 cri.go:89] found id: ""
	I0203 11:51:23.481971  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.481984  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:23.482005  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:23.482068  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:23.514232  173069 cri.go:89] found id: ""
	I0203 11:51:23.514258  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.514266  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:23.514272  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:23.514345  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:23.552516  173069 cri.go:89] found id: ""
	I0203 11:51:23.552541  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.552550  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:23.552556  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:23.552606  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:23.591497  173069 cri.go:89] found id: ""
	I0203 11:51:23.591527  173069 logs.go:282] 0 containers: []
	W0203 11:51:23.591538  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:23.591558  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:23.591573  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:23.682424  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:23.682466  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:23.731476  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:23.731510  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:23.795190  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:23.795226  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:23.813039  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:23.813080  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:23.896888  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:26.398024  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:26.412515  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:26.412586  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:26.448795  173069 cri.go:89] found id: ""
	I0203 11:51:26.448833  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.448846  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:26.448854  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:26.448919  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:26.489605  173069 cri.go:89] found id: ""
	I0203 11:51:26.489638  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.489650  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:26.489658  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:26.489721  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:26.533905  173069 cri.go:89] found id: ""
	I0203 11:51:26.533933  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.533941  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:26.533956  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:26.534029  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:26.570038  173069 cri.go:89] found id: ""
	I0203 11:51:26.570069  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.570080  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:26.570088  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:26.570164  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:26.605422  173069 cri.go:89] found id: ""
	I0203 11:51:26.605446  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.605455  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:26.605460  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:26.605521  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:26.643384  173069 cri.go:89] found id: ""
	I0203 11:51:26.643422  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.643433  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:26.643440  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:26.643562  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:26.679201  173069 cri.go:89] found id: ""
	I0203 11:51:26.679235  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.679248  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:26.679257  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:26.679325  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:26.713047  173069 cri.go:89] found id: ""
	I0203 11:51:26.713084  173069 logs.go:282] 0 containers: []
	W0203 11:51:26.713095  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:26.713111  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:26.713134  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:26.794041  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:26.794087  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:26.837881  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:26.837921  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:26.887382  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:26.887426  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:26.902312  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:26.902351  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:26.986590  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:29.488295  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:29.502789  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:29.502871  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:29.542796  173069 cri.go:89] found id: ""
	I0203 11:51:29.542830  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.542841  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:29.542850  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:29.542920  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:29.586282  173069 cri.go:89] found id: ""
	I0203 11:51:29.586325  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.586336  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:29.586345  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:29.586409  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:29.627442  173069 cri.go:89] found id: ""
	I0203 11:51:29.627471  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.627479  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:29.627488  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:29.627538  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:29.667109  173069 cri.go:89] found id: ""
	I0203 11:51:29.667144  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.667152  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:29.667159  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:29.667211  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:29.700674  173069 cri.go:89] found id: ""
	I0203 11:51:29.700701  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.700708  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:29.700714  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:29.700773  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:29.740304  173069 cri.go:89] found id: ""
	I0203 11:51:29.740339  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.740351  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:29.740360  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:29.740425  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:29.773982  173069 cri.go:89] found id: ""
	I0203 11:51:29.774025  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.774037  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:29.774045  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:29.774100  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:29.810991  173069 cri.go:89] found id: ""
	I0203 11:51:29.811023  173069 logs.go:282] 0 containers: []
	W0203 11:51:29.811034  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:29.811047  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:29.811065  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:29.888928  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:29.888966  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:29.926343  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:29.926388  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:29.972381  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:29.972418  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:29.985913  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:29.985951  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:30.053668  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:32.554359  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:32.571170  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:32.571250  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:32.621140  173069 cri.go:89] found id: ""
	I0203 11:51:32.621173  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.621184  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:32.621193  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:32.621260  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:32.664002  173069 cri.go:89] found id: ""
	I0203 11:51:32.664030  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.664041  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:32.664049  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:32.664111  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:32.704583  173069 cri.go:89] found id: ""
	I0203 11:51:32.704621  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.704634  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:32.704644  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:32.704708  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:32.742260  173069 cri.go:89] found id: ""
	I0203 11:51:32.742300  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.742311  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:32.742320  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:32.742386  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:32.782278  173069 cri.go:89] found id: ""
	I0203 11:51:32.782306  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.782316  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:32.782324  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:32.782390  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:32.820629  173069 cri.go:89] found id: ""
	I0203 11:51:32.820658  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.820669  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:32.820677  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:32.820740  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:32.865933  173069 cri.go:89] found id: ""
	I0203 11:51:32.865963  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.865973  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:32.865981  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:32.866071  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:32.912442  173069 cri.go:89] found id: ""
	I0203 11:51:32.912469  173069 logs.go:282] 0 containers: []
	W0203 11:51:32.912487  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:32.912499  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:32.912514  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:32.976703  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:32.976755  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:32.997441  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:32.997494  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:33.112384  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:33.112407  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:33.112422  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:33.203102  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:33.203139  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:35.745190  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:35.758140  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:35.758208  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:35.791770  173069 cri.go:89] found id: ""
	I0203 11:51:35.791803  173069 logs.go:282] 0 containers: []
	W0203 11:51:35.791821  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:35.791830  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:35.791886  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:35.830932  173069 cri.go:89] found id: ""
	I0203 11:51:35.830962  173069 logs.go:282] 0 containers: []
	W0203 11:51:35.830973  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:35.830982  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:35.831047  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:35.864060  173069 cri.go:89] found id: ""
	I0203 11:51:35.864088  173069 logs.go:282] 0 containers: []
	W0203 11:51:35.864095  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:35.864102  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:35.864162  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:35.899314  173069 cri.go:89] found id: ""
	I0203 11:51:35.899348  173069 logs.go:282] 0 containers: []
	W0203 11:51:35.899356  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:35.899362  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:35.899423  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:35.934855  173069 cri.go:89] found id: ""
	I0203 11:51:35.934882  173069 logs.go:282] 0 containers: []
	W0203 11:51:35.934892  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:35.934899  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:35.934967  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:35.967599  173069 cri.go:89] found id: ""
	I0203 11:51:35.967633  173069 logs.go:282] 0 containers: []
	W0203 11:51:35.967641  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:35.967647  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:35.967704  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:36.004667  173069 cri.go:89] found id: ""
	I0203 11:51:36.004696  173069 logs.go:282] 0 containers: []
	W0203 11:51:36.004703  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:36.004709  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:36.004772  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:36.040385  173069 cri.go:89] found id: ""
	I0203 11:51:36.040425  173069 logs.go:282] 0 containers: []
	W0203 11:51:36.040438  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:36.040452  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:36.040469  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:36.090428  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:36.090461  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:36.103299  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:36.103323  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:36.176461  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:36.176488  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:36.176501  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:36.255930  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:36.256050  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:38.798158  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:38.814502  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:38.814589  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:38.854567  173069 cri.go:89] found id: ""
	I0203 11:51:38.854602  173069 logs.go:282] 0 containers: []
	W0203 11:51:38.854614  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:38.854622  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:38.854680  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:38.894543  173069 cri.go:89] found id: ""
	I0203 11:51:38.894576  173069 logs.go:282] 0 containers: []
	W0203 11:51:38.894587  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:38.894596  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:38.894658  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:38.932561  173069 cri.go:89] found id: ""
	I0203 11:51:38.932595  173069 logs.go:282] 0 containers: []
	W0203 11:51:38.932607  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:38.932616  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:38.932682  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:38.968539  173069 cri.go:89] found id: ""
	I0203 11:51:38.968568  173069 logs.go:282] 0 containers: []
	W0203 11:51:38.968580  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:38.968592  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:38.968651  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:39.005524  173069 cri.go:89] found id: ""
	I0203 11:51:39.005545  173069 logs.go:282] 0 containers: []
	W0203 11:51:39.005552  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:39.005557  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:39.005601  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:39.041468  173069 cri.go:89] found id: ""
	I0203 11:51:39.041495  173069 logs.go:282] 0 containers: []
	W0203 11:51:39.041505  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:39.041514  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:39.041578  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:39.077332  173069 cri.go:89] found id: ""
	I0203 11:51:39.077361  173069 logs.go:282] 0 containers: []
	W0203 11:51:39.077374  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:39.077382  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:39.077449  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:39.110441  173069 cri.go:89] found id: ""
	I0203 11:51:39.110470  173069 logs.go:282] 0 containers: []
	W0203 11:51:39.110481  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:39.110493  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:39.110509  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:39.161314  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:39.161357  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:39.177607  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:39.177637  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:39.257738  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:39.257779  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:39.257793  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:39.335618  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:39.335662  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:41.879251  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:41.892114  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:41.892188  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:41.923719  173069 cri.go:89] found id: ""
	I0203 11:51:41.923748  173069 logs.go:282] 0 containers: []
	W0203 11:51:41.923756  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:41.923763  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:41.923823  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:41.961918  173069 cri.go:89] found id: ""
	I0203 11:51:41.961946  173069 logs.go:282] 0 containers: []
	W0203 11:51:41.961954  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:41.961960  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:41.962031  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:41.997276  173069 cri.go:89] found id: ""
	I0203 11:51:41.997306  173069 logs.go:282] 0 containers: []
	W0203 11:51:41.997314  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:41.997320  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:41.997384  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:42.027979  173069 cri.go:89] found id: ""
	I0203 11:51:42.028004  173069 logs.go:282] 0 containers: []
	W0203 11:51:42.028012  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:42.028020  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:42.028085  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:42.060977  173069 cri.go:89] found id: ""
	I0203 11:51:42.061004  173069 logs.go:282] 0 containers: []
	W0203 11:51:42.061014  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:42.061020  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:42.061073  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:42.093853  173069 cri.go:89] found id: ""
	I0203 11:51:42.093888  173069 logs.go:282] 0 containers: []
	W0203 11:51:42.093900  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:42.093909  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:42.093976  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:42.126541  173069 cri.go:89] found id: ""
	I0203 11:51:42.126573  173069 logs.go:282] 0 containers: []
	W0203 11:51:42.126585  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:42.126593  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:42.126662  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:42.160467  173069 cri.go:89] found id: ""
	I0203 11:51:42.160494  173069 logs.go:282] 0 containers: []
	W0203 11:51:42.160505  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:42.160520  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:42.160536  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:42.211829  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:42.211871  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:42.224858  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:42.224890  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:42.291521  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:42.291545  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:42.291564  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:42.370394  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:42.370432  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:44.930053  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:44.942588  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:44.942653  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:44.975071  173069 cri.go:89] found id: ""
	I0203 11:51:44.975103  173069 logs.go:282] 0 containers: []
	W0203 11:51:44.975112  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:44.975118  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:44.975170  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:45.007721  173069 cri.go:89] found id: ""
	I0203 11:51:45.007753  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.007762  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:45.007768  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:45.007816  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:45.039088  173069 cri.go:89] found id: ""
	I0203 11:51:45.039122  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.039134  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:45.039141  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:45.039197  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:45.070289  173069 cri.go:89] found id: ""
	I0203 11:51:45.070314  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.070323  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:45.070329  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:45.070384  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:45.101188  173069 cri.go:89] found id: ""
	I0203 11:51:45.101225  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.101237  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:45.101245  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:45.101313  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:45.133486  173069 cri.go:89] found id: ""
	I0203 11:51:45.133522  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.133534  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:45.133543  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:45.133606  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:45.166222  173069 cri.go:89] found id: ""
	I0203 11:51:45.166251  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.166261  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:45.166269  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:45.166331  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:45.197863  173069 cri.go:89] found id: ""
	I0203 11:51:45.197894  173069 logs.go:282] 0 containers: []
	W0203 11:51:45.197903  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:45.197912  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:45.197924  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:45.246497  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:45.246544  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:45.259763  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:45.259792  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:45.332133  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:45.332161  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:45.332174  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:45.416748  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:45.416789  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:47.958652  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:47.972404  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:47.972476  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:48.003924  173069 cri.go:89] found id: ""
	I0203 11:51:48.003952  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.003963  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:48.003972  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:48.004036  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:48.035462  173069 cri.go:89] found id: ""
	I0203 11:51:48.035495  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.035507  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:48.035516  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:48.035571  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:48.066226  173069 cri.go:89] found id: ""
	I0203 11:51:48.066255  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.066266  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:48.066274  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:48.066340  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:48.097119  173069 cri.go:89] found id: ""
	I0203 11:51:48.097150  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.097162  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:48.097170  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:48.097234  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:48.129010  173069 cri.go:89] found id: ""
	I0203 11:51:48.129049  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.129061  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:48.129069  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:48.129128  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:48.172322  173069 cri.go:89] found id: ""
	I0203 11:51:48.172355  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.172363  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:48.172371  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:48.172442  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:48.203549  173069 cri.go:89] found id: ""
	I0203 11:51:48.203579  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.203587  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:48.203594  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:48.203645  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:48.234281  173069 cri.go:89] found id: ""
	I0203 11:51:48.234306  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.234317  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:48.234330  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:48.234347  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:48.246492  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:48.246517  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:48.310115  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:48.310151  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:48.310168  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:48.386999  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:48.387026  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:48.423031  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:48.423061  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
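	# --- Illustrative sketch (not part of the captured run) ---------------------
	# The cycle above is minikube's log gatherer: it probes each control-plane
	# component by name through crictl and, finding no containers, falls back to
	# journalctl/dmesg. The same probe can be reproduced by hand on the node;
	# component names and fallback commands are copied verbatim from the log.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name" | grep -q . \
	    || echo "no container matching \"$name\""
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# ----------------------------------------------------------------------------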
	I0203 11:51:50.971751  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:50.984547  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:50.984616  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:51.021321  173069 cri.go:89] found id: ""
	I0203 11:51:51.021357  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.021367  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:51.021376  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:51.021435  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:51.052316  173069 cri.go:89] found id: ""
	I0203 11:51:51.052346  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.052365  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:51.052374  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:51.052439  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:51.095230  173069 cri.go:89] found id: ""
	I0203 11:51:51.095260  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.095273  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:51.095281  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:51.095344  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:51.127525  173069 cri.go:89] found id: ""
	I0203 11:51:51.127555  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.127564  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:51.127571  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:51.127642  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:51.174651  173069 cri.go:89] found id: ""
	I0203 11:51:51.174683  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.174694  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:51.174700  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:51.174761  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:51.208470  173069 cri.go:89] found id: ""
	I0203 11:51:51.208498  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.208510  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:51.208518  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:51.208585  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:51.242996  173069 cri.go:89] found id: ""
	I0203 11:51:51.243022  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.243031  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:51.243042  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:51.243103  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:51.277561  173069 cri.go:89] found id: ""
	I0203 11:51:51.277584  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.277592  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:51.277602  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:51.277613  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:51.316285  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:51.316313  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:51.378564  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:51.378598  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:51.391948  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:51.391974  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:51.459101  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:51.459127  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:51.459140  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:54.041961  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:54.057395  173069 kubeadm.go:597] duration metric: took 4m4.242570395s to restartPrimaryControlPlane
	W0203 11:51:54.057514  173069 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0203 11:51:54.057545  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:51:54.515481  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:51:54.529356  173069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:51:54.538455  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:51:54.547140  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:51:54.547165  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:51:54.547215  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:51:54.555393  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:51:54.555454  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:51:54.564221  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:51:54.572805  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:51:54.572854  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:51:54.581348  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.589519  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:51:54.589584  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.598204  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:51:54.606299  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:51:54.606354  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
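	# --- Illustrative sketch (not part of the captured run) ---------------------
	# The four grep/rm pairs above are minikube's stale-kubeconfig check: each
	# file under /etc/kubernetes must reference the expected API endpoint or it
	# is removed before "kubeadm init". Here every grep exits with status 2
	# because the files do not exist, so the rm calls are no-ops. The same check,
	# written as one loop (endpoint and paths copied from the log):
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
	# ----------------------------------------------------------------------------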
	I0203 11:51:54.614879  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:51:54.681507  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:51:54.681579  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:51:54.833975  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:51:54.834115  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:51:54.834236  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:51:55.015734  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:51:55.017800  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:51:55.017908  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:51:55.018029  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:51:55.018147  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:51:55.018236  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:51:55.018336  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:51:55.018420  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:51:55.018509  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:51:55.018605  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:51:55.018770  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:51:55.019144  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:51:55.019209  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:51:55.019307  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:51:55.202633  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:51:55.377699  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:51:55.476193  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:51:55.684690  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:51:55.706297  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:51:55.707243  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:51:55.707310  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:51:55.857226  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:51:55.859128  173069 out.go:235]   - Booting up control plane ...
	I0203 11:51:55.859247  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:51:55.863942  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:51:55.865838  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:51:55.867142  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:51:55.871067  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:52:35.872135  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:52:35.872966  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:35.873172  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:40.873720  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:40.873968  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:50.874520  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:50.874761  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:10.875767  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:10.876032  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878348  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:50.878572  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878585  173069 kubeadm.go:310] 
	I0203 11:53:50.878677  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:53:50.878746  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:53:50.878756  173069 kubeadm.go:310] 
	I0203 11:53:50.878805  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:53:50.878848  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:53:50.878993  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:53:50.879004  173069 kubeadm.go:310] 
	I0203 11:53:50.879145  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:53:50.879192  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:53:50.879235  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:53:50.879245  173069 kubeadm.go:310] 
	I0203 11:53:50.879390  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:53:50.879507  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:53:50.879517  173069 kubeadm.go:310] 
	I0203 11:53:50.879660  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:53:50.879782  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:53:50.879904  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:53:50.880019  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:53:50.880033  173069 kubeadm.go:310] 
	I0203 11:53:50.880322  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:53:50.880397  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:53:50.880465  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0203 11:53:50.880620  173069 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
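	# --- Illustrative sketch (not part of the captured run) ---------------------
	# kubeadm's error text above already lists the triage steps for a kubelet
	# that never answered on localhost:10248. On this CRI-O node they would be
	# (commands copied from the error text; CONTAINERID is a placeholder):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# The run below instead resets with "kubeadm reset" and retries the init.
	# ----------------------------------------------------------------------------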
	
	I0203 11:53:50.880666  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:53:56.208593  173069 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.327900088s)
	I0203 11:53:56.208687  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:53:56.222067  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:53:56.231274  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:53:56.231296  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:53:56.231344  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:53:56.240522  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:53:56.240587  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:53:56.249755  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:53:56.258586  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:53:56.258645  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:53:56.267974  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.276669  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:53:56.276720  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.285661  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:53:56.294673  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:53:56.294734  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:53:56.303819  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:53:56.510714  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:55:52.911681  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:55:52.911777  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:55:52.913157  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:55:52.913224  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:55:52.913299  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:55:52.913463  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:55:52.913598  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:55:52.913672  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:55:52.915764  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:55:52.915857  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:55:52.915908  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:55:52.915975  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:55:52.916023  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:55:52.916077  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:55:52.916150  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:55:52.916233  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:55:52.916309  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:55:52.916424  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:55:52.916508  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:55:52.916542  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:55:52.916589  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:55:52.916635  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:55:52.916682  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:55:52.916747  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:55:52.916798  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:55:52.916898  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:55:52.916991  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:55:52.917027  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:55:52.917082  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:55:52.918947  173069 out.go:235]   - Booting up control plane ...
	I0203 11:55:52.919052  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:55:52.919135  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:55:52.919213  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:55:52.919298  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:55:52.919440  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:55:52.919509  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:55:52.919578  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919738  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.919799  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919950  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920007  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920158  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920230  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920452  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920558  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920806  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920815  173069 kubeadm.go:310] 
	I0203 11:55:52.920849  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:55:52.920884  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:55:52.920891  173069 kubeadm.go:310] 
	I0203 11:55:52.920924  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:55:52.920954  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:55:52.921051  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:55:52.921066  173069 kubeadm.go:310] 
	I0203 11:55:52.921160  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:55:52.921199  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:55:52.921228  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:55:52.921235  173069 kubeadm.go:310] 
	I0203 11:55:52.921355  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:55:52.921465  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:55:52.921476  173069 kubeadm.go:310] 
	I0203 11:55:52.921595  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:55:52.921666  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:55:52.921725  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:55:52.921781  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:55:52.921820  173069 kubeadm.go:310] 
	I0203 11:55:52.921866  173069 kubeadm.go:394] duration metric: took 8m3.159723737s to StartCluster
	I0203 11:55:52.921917  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:55:52.921979  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:55:52.965327  173069 cri.go:89] found id: ""
	I0203 11:55:52.965360  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.965370  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:55:52.965377  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:55:52.965429  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:55:52.999197  173069 cri.go:89] found id: ""
	I0203 11:55:52.999224  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.999233  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:55:52.999239  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:55:52.999290  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:55:53.033201  173069 cri.go:89] found id: ""
	I0203 11:55:53.033231  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.033239  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:55:53.033245  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:55:53.033298  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:55:53.069227  173069 cri.go:89] found id: ""
	I0203 11:55:53.069262  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.069274  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:55:53.069282  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:55:53.069361  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:55:53.102418  173069 cri.go:89] found id: ""
	I0203 11:55:53.102448  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.102460  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:55:53.102467  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:55:53.102595  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:55:53.134815  173069 cri.go:89] found id: ""
	I0203 11:55:53.134846  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.134859  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:55:53.134865  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:55:53.134916  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:55:53.184017  173069 cri.go:89] found id: ""
	I0203 11:55:53.184063  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.184075  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:55:53.184086  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:55:53.184180  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:55:53.218584  173069 cri.go:89] found id: ""
	I0203 11:55:53.218620  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.218630  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:55:53.218642  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:55:53.218656  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:55:53.267577  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:55:53.267624  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:55:53.280882  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:55:53.280915  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:55:53.352344  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:55:53.352371  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:55:53.352385  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:55:53.451451  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:55:53.451495  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0203 11:55:53.488076  173069 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 11:55:53.488133  173069 out.go:270] * 
	* 
	W0203 11:55:53.488199  173069 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.488213  173069 out.go:270] * 
	* 
	W0203 11:55:53.489069  173069 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 11:55:53.492291  173069 out.go:201] 
	W0203 11:55:53.493552  173069 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.493606  173069 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 11:55:53.493647  173069 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 11:55:53.494859  173069 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
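The failing start above ends with minikube's own suggestion (see the "Suggestion:" line in the log) to inspect 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal retry sketch based on that suggestion follows; the profile name and every other flag are copied from the failing invocation above, and treating the kubelet cgroup driver as the root cause is only an assumption.

	# Hypothetical retry: same flags as the failing start, plus the cgroup-driver
	# setting suggested in the log above.
	out/minikube-linux-amd64 start -p old-k8s-version-517711 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still does not come up, inspect it on the node, as the log suggests:
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh \
	  'sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50'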
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (260.383299ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
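The filtered status check above reports only the Host field. An unfiltered status call (a sketch; it assumes the profile has not yet been deleted) would also show the Kubelet and APIServer fields, which is where the K8S_KUBELET_NOT_RUNNING failure above would be visible:

	# Show all status fields for the profile instead of only {{.Host}}.
	out/minikube-linux-amd64 status -p old-k8s-version-517711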
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-517711 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-691067 image list                          | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| delete  | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| start   | -p newest-cni-586043 --memory=2200 --alsologtostderr   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-085638 image list                           | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| delete  | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| image   | default-k8s-diff-port-138645                           | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-586043             | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-586043                  | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-586043 --memory=2200 --alsologtostderr   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-586043 image list                           | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	| delete  | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:51:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:51:49.897155  175844 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:51:49.897275  175844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:51:49.897287  175844 out.go:358] Setting ErrFile to fd 2...
	I0203 11:51:49.897291  175844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:51:49.897486  175844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:51:49.898057  175844 out.go:352] Setting JSON to false
	I0203 11:51:49.898943  175844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9252,"bootTime":1738574258,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:51:49.899051  175844 start.go:139] virtualization: kvm guest
	I0203 11:51:49.901414  175844 out.go:177] * [newest-cni-586043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:51:49.903016  175844 notify.go:220] Checking for updates...
	I0203 11:51:49.903024  175844 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:51:49.904418  175844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:51:49.905475  175844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:51:49.906695  175844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:51:49.907794  175844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:51:49.909017  175844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:51:49.910440  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:51:49.910830  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:49.910906  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:49.925489  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0203 11:51:49.925936  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:49.926599  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:49.926617  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:49.926982  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:49.927181  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:49.927443  175844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:51:49.927733  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:49.927780  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:49.942754  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0203 11:51:49.943278  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:49.943789  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:49.943810  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:49.944116  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:49.944333  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:49.982359  175844 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 11:51:49.983564  175844 start.go:297] selected driver: kvm2
	I0203 11:51:49.983579  175844 start.go:901] validating driver "kvm2" against &{Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Net
work: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:51:49.983680  175844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:51:49.984357  175844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:51:49.984460  175844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:51:49.999536  175844 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:51:49.999973  175844 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0203 11:51:50.000007  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:51:50.000057  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:51:50.000113  175844 start.go:340] cluster config:
	{Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:51:50.000234  175844 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:51:50.001824  175844 out.go:177] * Starting "newest-cni-586043" primary control-plane node in "newest-cni-586043" cluster
	I0203 11:51:50.003075  175844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:51:50.003128  175844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 11:51:50.003141  175844 cache.go:56] Caching tarball of preloaded images
	I0203 11:51:50.003229  175844 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:51:50.003240  175844 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:51:50.003363  175844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/config.json ...
	I0203 11:51:50.003582  175844 start.go:360] acquireMachinesLock for newest-cni-586043: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:51:50.003637  175844 start.go:364] duration metric: took 33.224µs to acquireMachinesLock for "newest-cni-586043"
	I0203 11:51:50.003664  175844 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:51:50.003675  175844 fix.go:54] fixHost starting: 
	I0203 11:51:50.003993  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:50.004037  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:50.018719  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0203 11:51:50.020226  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:50.020848  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:50.020873  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:50.021243  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:50.021461  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:50.021601  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:51:50.023310  175844 fix.go:112] recreateIfNeeded on newest-cni-586043: state=Stopped err=<nil>
	I0203 11:51:50.023355  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	W0203 11:51:50.023508  175844 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:51:50.025216  175844 out.go:177] * Restarting existing kvm2 VM for "newest-cni-586043" ...
	I0203 11:51:47.958652  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:47.972404  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:47.972476  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:48.003924  173069 cri.go:89] found id: ""
	I0203 11:51:48.003952  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.003963  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:48.003972  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:48.004036  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:48.035462  173069 cri.go:89] found id: ""
	I0203 11:51:48.035495  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.035507  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:48.035516  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:48.035571  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:48.066226  173069 cri.go:89] found id: ""
	I0203 11:51:48.066255  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.066266  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:48.066274  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:48.066340  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:48.097119  173069 cri.go:89] found id: ""
	I0203 11:51:48.097150  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.097162  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:48.097170  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:48.097234  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:48.129010  173069 cri.go:89] found id: ""
	I0203 11:51:48.129049  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.129061  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:48.129069  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:48.129128  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:48.172322  173069 cri.go:89] found id: ""
	I0203 11:51:48.172355  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.172363  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:48.172371  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:48.172442  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:48.203549  173069 cri.go:89] found id: ""
	I0203 11:51:48.203579  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.203587  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:48.203594  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:48.203645  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:48.234281  173069 cri.go:89] found id: ""
	I0203 11:51:48.234306  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.234317  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:48.234330  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:48.234347  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:48.246492  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:48.246517  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:48.310115  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:48.310151  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:48.310168  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:48.386999  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:48.387026  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:48.423031  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:48.423061  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:50.971751  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:50.984547  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:50.984616  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:51.021321  173069 cri.go:89] found id: ""
	I0203 11:51:51.021357  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.021367  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:51.021376  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:51.021435  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:51.052316  173069 cri.go:89] found id: ""
	I0203 11:51:51.052346  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.052365  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:51.052374  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:51.052439  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:51.095230  173069 cri.go:89] found id: ""
	I0203 11:51:51.095260  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.095273  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:51.095281  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:51.095344  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:50.026238  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Start
	I0203 11:51:50.026392  175844 main.go:141] libmachine: (newest-cni-586043) starting domain...
	I0203 11:51:50.026416  175844 main.go:141] libmachine: (newest-cni-586043) ensuring networks are active...
	I0203 11:51:50.027168  175844 main.go:141] libmachine: (newest-cni-586043) Ensuring network default is active
	I0203 11:51:50.027412  175844 main.go:141] libmachine: (newest-cni-586043) Ensuring network mk-newest-cni-586043 is active
	I0203 11:51:50.027811  175844 main.go:141] libmachine: (newest-cni-586043) getting domain XML...
	I0203 11:51:50.028591  175844 main.go:141] libmachine: (newest-cni-586043) creating domain...
	I0203 11:51:51.307305  175844 main.go:141] libmachine: (newest-cni-586043) waiting for IP...
	I0203 11:51:51.308386  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.308948  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.309071  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.308963  175880 retry.go:31] will retry after 231.852312ms: waiting for domain to come up
	I0203 11:51:51.542677  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.543280  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.543310  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.543240  175880 retry.go:31] will retry after 253.507055ms: waiting for domain to come up
	I0203 11:51:51.798941  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.799486  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.799509  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.799452  175880 retry.go:31] will retry after 481.304674ms: waiting for domain to come up
	I0203 11:51:52.282121  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:52.282587  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:52.282613  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:52.282573  175880 retry.go:31] will retry after 574.20795ms: waiting for domain to come up
	I0203 11:51:52.858249  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:52.858753  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:52.858797  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:52.858730  175880 retry.go:31] will retry after 479.45061ms: waiting for domain to come up
	I0203 11:51:53.339378  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:53.339968  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:53.340048  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:53.339937  175880 retry.go:31] will retry after 611.732312ms: waiting for domain to come up
	I0203 11:51:53.953770  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:53.954271  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:53.954309  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:53.954225  175880 retry.go:31] will retry after 1.020753974s: waiting for domain to come up
	I0203 11:51:51.127525  173069 cri.go:89] found id: ""
	I0203 11:51:51.127555  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.127564  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:51.127571  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:51.127642  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:51.174651  173069 cri.go:89] found id: ""
	I0203 11:51:51.174683  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.174694  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:51.174700  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:51.174761  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:51.208470  173069 cri.go:89] found id: ""
	I0203 11:51:51.208498  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.208510  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:51.208518  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:51.208585  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:51.242996  173069 cri.go:89] found id: ""
	I0203 11:51:51.243022  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.243031  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:51.243042  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:51.243103  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:51.277561  173069 cri.go:89] found id: ""
	I0203 11:51:51.277584  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.277592  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:51.277602  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:51.277613  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:51.316285  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:51.316313  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:51.378564  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:51.378598  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:51.391948  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:51.391974  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:51.459101  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:51.459127  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:51.459140  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:54.041961  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:54.057395  173069 kubeadm.go:597] duration metric: took 4m4.242570395s to restartPrimaryControlPlane
	W0203 11:51:54.057514  173069 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0203 11:51:54.057545  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:51:54.515481  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:51:54.529356  173069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:51:54.538455  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:51:54.547140  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:51:54.547165  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:51:54.547215  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:51:54.555393  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:51:54.555454  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:51:54.564221  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:51:54.572805  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:51:54.572854  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:51:54.581348  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.589519  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:51:54.589584  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.598204  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:51:54.606299  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:51:54.606354  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
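The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes the file when the check fails, so the following `kubeadm init` can regenerate it. A minimal Go sketch of that stale-config cleanup, reusing the paths and endpoint string from the log; the logic is illustrative, not minikube's actual kubeadm.go implementation:

	// Sketch only: keep a kubeconfig when it already points at the expected
	// control-plane endpoint, otherwise treat it as stale and remove it.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: remove it so kubeadm rewrites it.
				fmt.Printf("%q may not reference %s - removing\n", f, endpoint)
				_ = os.Remove(f)
			}
		}
	}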
	I0203 11:51:54.614879  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:51:54.681507  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:51:54.681579  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:51:54.833975  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:51:54.834115  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:51:54.834236  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:51:55.015734  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:51:55.017800  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:51:55.017908  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:51:55.018029  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:51:55.018147  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:51:55.018236  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:51:55.018336  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:51:55.018420  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:51:55.018509  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:51:55.018605  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:51:55.018770  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:51:55.019144  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:51:55.019209  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:51:55.019307  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:51:55.202633  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:51:55.377699  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:51:55.476193  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:51:55.684690  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:51:55.706297  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:51:55.707243  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:51:55.707310  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:51:55.857226  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:51:55.859128  173069 out.go:235]   - Booting up control plane ...
	I0203 11:51:55.859247  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:51:55.863942  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:51:55.865838  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:51:55.867142  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:51:55.871067  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:51:54.976708  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:54.977205  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:54.977268  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:54.977201  175880 retry.go:31] will retry after 1.395111029s: waiting for domain to come up
	I0203 11:51:56.374208  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:56.374601  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:56.374630  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:56.374585  175880 retry.go:31] will retry after 1.224641048s: waiting for domain to come up
	I0203 11:51:57.600995  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:57.601460  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:57.601486  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:57.601423  175880 retry.go:31] will retry after 2.153368032s: waiting for domain to come up
	I0203 11:51:59.757799  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:59.758428  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:59.758462  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:59.758378  175880 retry.go:31] will retry after 1.84005517s: waiting for domain to come up
	I0203 11:52:01.600091  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:01.600507  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:52:01.600557  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:52:01.600500  175880 retry.go:31] will retry after 3.236577417s: waiting for domain to come up
	I0203 11:52:04.840924  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:04.841396  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:52:04.841418  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:52:04.841372  175880 retry.go:31] will retry after 4.182823067s: waiting for domain to come up
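The libmachine lines above poll libvirt for the domain's DHCP lease, sleeping a randomized and growing interval between attempts ("will retry after ...: waiting for domain to come up"). A minimal Go sketch of that retry-with-backoff pattern; the waitForIP probe, the jitter, and the 2-minute deadline are assumptions for illustration, not minikube's retry.go code:

	// Sketch of a retry loop with jittered, growing backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP is a hypothetical probe standing in for querying libvirt
	// for the domain's current DHCP lease.
	func waitForIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		backoff := 200 * time.Millisecond
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			ip, err := waitForIP()
			if err == nil {
				fmt.Println("found domain IP:", ip)
				return
			}
			// Randomize the delay a little so repeated probes do not align.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff = backoff * 3 / 2
			}
		}
		fmt.Println("timed out waiting for domain to come up")
	}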
	I0203 11:52:09.028277  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.028811  175844 main.go:141] libmachine: (newest-cni-586043) found domain IP: 192.168.72.151
	I0203 11:52:09.028840  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has current primary IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.028850  175844 main.go:141] libmachine: (newest-cni-586043) reserving static IP address...
	I0203 11:52:09.029304  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "newest-cni-586043", mac: "52:54:00:47:62:16", ip: "192.168.72.151"} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.029356  175844 main.go:141] libmachine: (newest-cni-586043) DBG | skip adding static IP to network mk-newest-cni-586043 - found existing host DHCP lease matching {name: "newest-cni-586043", mac: "52:54:00:47:62:16", ip: "192.168.72.151"}
	I0203 11:52:09.029375  175844 main.go:141] libmachine: (newest-cni-586043) reserved static IP address 192.168.72.151 for domain newest-cni-586043
	I0203 11:52:09.029391  175844 main.go:141] libmachine: (newest-cni-586043) waiting for SSH...
	I0203 11:52:09.029402  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Getting to WaitForSSH function...
	I0203 11:52:09.031306  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.031561  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.031584  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.031691  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Using SSH client type: external
	I0203 11:52:09.031718  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa (-rw-------)
	I0203 11:52:09.031754  175844 main.go:141] libmachine: (newest-cni-586043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:52:09.031768  175844 main.go:141] libmachine: (newest-cni-586043) DBG | About to run SSH command:
	I0203 11:52:09.031783  175844 main.go:141] libmachine: (newest-cni-586043) DBG | exit 0
	I0203 11:52:09.158021  175844 main.go:141] libmachine: (newest-cni-586043) DBG | SSH cmd err, output: <nil>: 
	I0203 11:52:09.158333  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetConfigRaw
	I0203 11:52:09.158996  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:09.161428  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.161811  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.161843  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.162127  175844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/config.json ...
	I0203 11:52:09.162368  175844 machine.go:93] provisionDockerMachine start ...
	I0203 11:52:09.162395  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:09.162624  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.164802  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.165087  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.165126  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.165207  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.165381  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.165547  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.165670  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.165859  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.166136  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.166151  175844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:52:09.274234  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:52:09.274266  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.274537  175844 buildroot.go:166] provisioning hostname "newest-cni-586043"
	I0203 11:52:09.274559  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.274783  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.277599  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.277966  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.278013  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.278316  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.278559  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.278755  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.278915  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.279070  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.279267  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.279283  175844 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-586043 && echo "newest-cni-586043" | sudo tee /etc/hostname
	I0203 11:52:09.400130  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-586043
	
	I0203 11:52:09.400158  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.402972  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.403283  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.403317  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.403501  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.403705  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.403913  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.404066  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.404242  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.404412  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.404436  175844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-586043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-586043/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-586043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:52:09.517890  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:52:09.517929  175844 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:52:09.517949  175844 buildroot.go:174] setting up certificates
	I0203 11:52:09.517959  175844 provision.go:84] configureAuth start
	I0203 11:52:09.517969  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.518273  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:09.520729  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.521035  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.521065  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.521252  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.523526  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.523855  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.523884  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.524049  175844 provision.go:143] copyHostCerts
	I0203 11:52:09.524110  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:52:09.524130  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:52:09.524200  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:52:09.524288  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:52:09.524296  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:52:09.524320  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:52:09.524376  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:52:09.524383  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:52:09.524402  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:52:09.524452  175844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.newest-cni-586043 san=[127.0.0.1 192.168.72.151 localhost minikube newest-cni-586043]
	I0203 11:52:09.790829  175844 provision.go:177] copyRemoteCerts
	I0203 11:52:09.790896  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:52:09.790920  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.793962  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.794408  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.794440  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.794595  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.794829  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.794997  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.795367  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:09.881518  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:52:09.906901  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0203 11:52:09.931388  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:52:09.953430  175844 provision.go:87] duration metric: took 435.447216ms to configureAuth
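The provision.go lines above generate a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the node hostname, then copy ca.pem, server.pem and server-key.pem into /etc/docker. A Go sketch of issuing a certificate with that SAN set via crypto/x509; it self-signs for brevity, whereas the logged flow signs with minikube's own CA (ca.pem/ca-key.pem):

	// Sketch: create a server certificate with the SANs from the log.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		dnsNames := []string{"localhost", "minikube", "newest-cni-586043"}
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.151")}

		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-586043"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		// Self-signed here; minikube would use its CA cert/key as the parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		fmt.Fprintln(os.Stderr, "wrote server certificate with SANs", dnsNames, ips)
	}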
	I0203 11:52:09.953471  175844 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:52:09.953676  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:52:09.953755  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.956581  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.956917  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.956942  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.957055  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.957227  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.957362  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.957584  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.957788  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.958041  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.958063  175844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:52:10.176560  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:52:10.176589  175844 machine.go:96] duration metric: took 1.014204647s to provisionDockerMachine
	I0203 11:52:10.176602  175844 start.go:293] postStartSetup for "newest-cni-586043" (driver="kvm2")
	I0203 11:52:10.176613  175844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:52:10.176631  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.176961  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:52:10.176996  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.179737  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.180134  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.180164  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.180316  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.180547  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.180744  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.180895  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.266380  175844 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:52:10.270497  175844 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:52:10.270522  175844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:52:10.270598  175844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:52:10.270682  175844 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:52:10.270792  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:52:10.281329  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:52:10.305215  175844 start.go:296] duration metric: took 128.597013ms for postStartSetup
	I0203 11:52:10.305259  175844 fix.go:56] duration metric: took 20.301585236s for fixHost
	I0203 11:52:10.305281  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.308015  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.308340  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.308363  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.308576  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.308776  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.308933  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.309106  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.309269  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:10.309477  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:10.309488  175844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:52:10.418635  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738583530.394973154
	
	I0203 11:52:10.418670  175844 fix.go:216] guest clock: 1738583530.394973154
	I0203 11:52:10.418681  175844 fix.go:229] Guest: 2025-02-03 11:52:10.394973154 +0000 UTC Remote: 2025-02-03 11:52:10.305263637 +0000 UTC m=+20.446505021 (delta=89.709517ms)
	I0203 11:52:10.418749  175844 fix.go:200] guest clock delta is within tolerance: 89.709517ms
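The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew when it falls inside a tolerance. A small Go sketch of that comparison; the one-second tolerance is an assumption for illustration:

	// Sketch: compute host/guest clock delta and check it against a tolerance.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Example value in the format returned by `date +%s.%N` over SSH.
		guestSeconds := 1738583530.394973154
		guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
		host := time.Now()

		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock is off by %v, consider syncing time\n", delta)
		}
	}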
	I0203 11:52:10.418762  175844 start.go:83] releasing machines lock for "newest-cni-586043", held for 20.41511092s
	I0203 11:52:10.418798  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.419078  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:10.421707  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.422072  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.422103  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.422248  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.422797  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.422964  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.423053  175844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:52:10.423102  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.423134  175844 ssh_runner.go:195] Run: cat /version.json
	I0203 11:52:10.423157  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.425822  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.425947  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426182  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.426204  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426244  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.426265  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426381  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.426506  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.426588  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.426696  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.426767  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.426837  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.426898  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.426931  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.534682  175844 ssh_runner.go:195] Run: systemctl --version
	I0203 11:52:10.540384  175844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:52:10.689697  175844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:52:10.695210  175844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:52:10.695274  175844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:52:10.710890  175844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:52:10.710920  175844 start.go:495] detecting cgroup driver to use...
	I0203 11:52:10.710996  175844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:52:10.726494  175844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:52:10.739926  175844 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:52:10.739983  175844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:52:10.753560  175844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:52:10.767625  175844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:52:10.883158  175844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:52:11.040505  175844 docker.go:233] disabling docker service ...
	I0203 11:52:11.040580  175844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:52:11.054421  175844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:52:11.067456  175844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:52:11.197256  175844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:52:11.326650  175844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:52:11.347953  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:52:11.365712  175844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 11:52:11.365783  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.375704  175844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:52:11.375785  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.385498  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.395211  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.404733  175844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:52:11.414432  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.424057  175844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.439837  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.449629  175844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:52:11.458405  175844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:52:11.458478  175844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:52:11.470212  175844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:52:11.480208  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:11.603955  175844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:52:11.686456  175844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:52:11.686529  175844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
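After restarting CRI-O, the log waits up to 60s for the runtime socket to exist before probing crictl. A minimal Go sketch of that socket poll; the 500ms interval is an assumption, the socket path is the one shown in the log:

	// Sketch: poll for the CRI-O socket with a deadline.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const socket = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat(socket); err == nil {
				fmt.Println("socket is ready:", socket)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", socket)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}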
	I0203 11:52:11.690882  175844 start.go:563] Will wait 60s for crictl version
	I0203 11:52:11.690934  175844 ssh_runner.go:195] Run: which crictl
	I0203 11:52:11.694501  175844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:52:11.731809  175844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:52:11.731905  175844 ssh_runner.go:195] Run: crio --version
	I0203 11:52:11.761777  175844 ssh_runner.go:195] Run: crio --version
	I0203 11:52:11.793653  175844 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0203 11:52:11.795062  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:11.797753  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:11.798098  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:11.798125  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:11.798347  175844 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0203 11:52:11.802272  175844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:52:11.815917  175844 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0203 11:52:11.817298  175844 kubeadm.go:883] updating cluster {Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:52:11.817452  175844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:52:11.817531  175844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:52:11.850957  175844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0203 11:52:11.851042  175844 ssh_runner.go:195] Run: which lz4
	I0203 11:52:11.854770  175844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:52:11.858671  175844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:52:11.858703  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0203 11:52:13.043197  175844 crio.go:462] duration metric: took 1.188462639s to copy over tarball
	I0203 11:52:13.043293  175844 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:52:15.160894  175844 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117557833s)
	I0203 11:52:15.160939  175844 crio.go:469] duration metric: took 2.117706974s to extract the tarball
	I0203 11:52:15.160949  175844 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:52:15.198286  175844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:52:15.239287  175844 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:52:15.239321  175844 cache_images.go:84] Images are preloaded, skipping loading
	I0203 11:52:15.239330  175844 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.32.1 crio true true} ...
	I0203 11:52:15.239461  175844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-586043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:52:15.239619  175844 ssh_runner.go:195] Run: crio config
	I0203 11:52:15.287775  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:52:15.287800  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:52:15.287810  175844 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0203 11:52:15.287833  175844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-586043 NodeName:newest-cni-586043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:52:15.287959  175844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-586043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:52:15.288022  175844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:52:15.297463  175844 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:52:15.297537  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:52:15.306437  175844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0203 11:52:15.321420  175844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:52:15.336615  175844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0203 11:52:15.352231  175844 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0203 11:52:15.355798  175844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:52:15.367061  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:15.495735  175844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:52:15.512622  175844 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043 for IP: 192.168.72.151
	I0203 11:52:15.512651  175844 certs.go:194] generating shared ca certs ...
	I0203 11:52:15.512674  175844 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:15.512839  175844 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:52:15.512893  175844 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:52:15.512907  175844 certs.go:256] generating profile certs ...
	I0203 11:52:15.513010  175844 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/client.key
	I0203 11:52:15.513093  175844 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.key.63294795
	I0203 11:52:15.513150  175844 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.key
	I0203 11:52:15.513307  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:52:15.513348  175844 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:52:15.513370  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:52:15.513458  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:52:15.513498  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:52:15.513536  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:52:15.513590  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:52:15.514532  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:52:15.549975  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:52:15.586087  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:52:15.616774  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:52:15.650861  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 11:52:15.677800  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:52:15.702344  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:52:15.724326  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:52:15.746037  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:52:15.768136  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:52:15.790221  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:52:15.812120  175844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:52:15.828614  175844 ssh_runner.go:195] Run: openssl version
	I0203 11:52:15.834594  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:52:15.845364  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.849706  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.849770  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.855545  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:52:15.866161  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:52:15.876957  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.881522  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.881602  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.887046  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:52:15.897606  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:52:15.908452  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.912883  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.912951  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.918459  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:52:15.928802  175844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:52:15.933142  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:52:15.938806  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:52:15.944291  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:52:15.949834  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:52:15.955213  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:52:15.960551  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 11:52:15.965905  175844 kubeadm.go:392] StartCluster: {Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:52:15.965992  175844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:52:15.966055  175844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:52:16.005648  175844 cri.go:89] found id: ""
	I0203 11:52:16.005716  175844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:52:16.015599  175844 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 11:52:16.015623  175844 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 11:52:16.015672  175844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 11:52:16.024927  175844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:52:16.025481  175844 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-586043" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:52:16.025667  175844 kubeconfig.go:62] /home/jenkins/minikube-integration/20354-109432/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-586043" cluster setting kubeconfig missing "newest-cni-586043" context setting]
	I0203 11:52:16.025988  175844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:16.028966  175844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 11:52:16.038295  175844 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0203 11:52:16.038346  175844 kubeadm.go:1160] stopping kube-system containers ...
	I0203 11:52:16.038363  175844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0203 11:52:16.038415  175844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:52:16.076934  175844 cri.go:89] found id: ""
	I0203 11:52:16.077021  175844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 11:52:16.093360  175844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:52:16.102923  175844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:52:16.102952  175844 kubeadm.go:157] found existing configuration files:
	
	I0203 11:52:16.103002  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:52:16.111845  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:52:16.111910  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:52:16.121141  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:52:16.129822  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:52:16.129886  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:52:16.138692  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:52:16.147297  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:52:16.147368  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:52:16.157136  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:52:16.166841  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:52:16.166927  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:52:16.176387  175844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:52:16.185620  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:16.308286  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.428161  175844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.119820981s)
	I0203 11:52:17.428197  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.617442  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.710553  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.786236  175844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:52:17.786327  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:18.287335  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:18.787276  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.287247  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.787249  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.832243  175844 api_server.go:72] duration metric: took 2.046005993s to wait for apiserver process to appear ...
	I0203 11:52:19.832296  175844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:52:19.832324  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:19.832848  175844 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": dial tcp 192.168.72.151:8443: connect: connection refused
	I0203 11:52:20.333113  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.593112  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:52:22.593149  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:52:22.593168  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.615767  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:52:22.615799  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:52:22.833274  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.838649  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:52:22.838680  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:52:23.333376  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:23.338020  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:52:23.338047  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:52:23.832467  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:23.836670  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0203 11:52:23.842741  175844 api_server.go:141] control plane version: v1.32.1
	I0203 11:52:23.842765  175844 api_server.go:131] duration metric: took 4.010461718s to wait for apiserver health ...
	I0203 11:52:23.842774  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:52:23.842781  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:52:23.844446  175844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 11:52:23.845620  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 11:52:23.878399  175844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0203 11:52:23.908467  175844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:52:23.916662  175844 system_pods.go:59] 8 kube-system pods found
	I0203 11:52:23.916703  175844 system_pods.go:61] "coredns-668d6bf9bc-cr5dw" [3d1b7381-6b42-4160-ba9d-6fddc2408174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:52:23.916712  175844 system_pods.go:61] "etcd-newest-cni-586043" [16317397-91b4-459d-a91f-ce10dc19f0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:52:23.916721  175844 system_pods.go:61] "kube-apiserver-newest-cni-586043" [79bd9928-7593-4eda-a9d6-fe3fe263c33a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:52:23.916727  175844 system_pods.go:61] "kube-controller-manager-newest-cni-586043" [8a00cc32-1347-42f0-b92b-ecf548236642] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:52:23.916733  175844 system_pods.go:61] "kube-proxy-c4bgm" [1a4f7c54-c137-401a-b004-2c93f251a646] Running
	I0203 11:52:23.916738  175844 system_pods.go:61] "kube-scheduler-newest-cni-586043" [e796e345-ebf7-4e6f-86d8-357cade7d05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 11:52:23.916743  175844 system_pods.go:61] "metrics-server-f79f97bbb-w4v6r" [5c20a6e1-46c0-43fb-8057-90f4d2fc6d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 11:52:23.916750  175844 system_pods.go:61] "storage-provisioner" [9720ea0d-98d4-4916-8e71-71a4e7a080d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:52:23.916765  175844 system_pods.go:74] duration metric: took 8.272337ms to wait for pod list to return data ...
	I0203 11:52:23.916777  175844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:52:23.920379  175844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:52:23.920405  175844 node_conditions.go:123] node cpu capacity is 2
	I0203 11:52:23.920416  175844 node_conditions.go:105] duration metric: took 3.634031ms to run NodePressure ...
	I0203 11:52:23.920432  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:24.231056  175844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:52:24.244201  175844 ops.go:34] apiserver oom_adj: -16
	I0203 11:52:24.244230  175844 kubeadm.go:597] duration metric: took 8.228599887s to restartPrimaryControlPlane
	I0203 11:52:24.244242  175844 kubeadm.go:394] duration metric: took 8.278345475s to StartCluster
	I0203 11:52:24.244264  175844 settings.go:142] acquiring lock: {Name:mk7f08542cc4ae303b222901a9d369cc0753d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:24.244357  175844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:52:24.245400  175844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:24.245703  175844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:52:24.245788  175844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 11:52:24.245905  175844 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-586043"
	I0203 11:52:24.245926  175844 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-586043"
	W0203 11:52:24.245937  175844 addons.go:247] addon storage-provisioner should already be in state true
	I0203 11:52:24.245931  175844 addons.go:69] Setting default-storageclass=true in profile "newest-cni-586043"
	I0203 11:52:24.245943  175844 addons.go:69] Setting metrics-server=true in profile "newest-cni-586043"
	I0203 11:52:24.245967  175844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-586043"
	I0203 11:52:24.245974  175844 addons.go:238] Setting addon metrics-server=true in "newest-cni-586043"
	I0203 11:52:24.245979  175844 addons.go:69] Setting dashboard=true in profile "newest-cni-586043"
	I0203 11:52:24.246021  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:52:24.246026  175844 addons.go:238] Setting addon dashboard=true in "newest-cni-586043"
	W0203 11:52:24.246043  175844 addons.go:247] addon dashboard should already be in state true
	W0203 11:52:24.246056  175844 addons.go:247] addon metrics-server should already be in state true
	I0203 11:52:24.246096  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.246136  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.245971  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.246487  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246541  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246546  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246575  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246627  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246637  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246653  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246581  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.249614  175844 out.go:177] * Verifying Kubernetes components...
	I0203 11:52:24.250962  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:24.264646  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0203 11:52:24.265348  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.266025  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.266044  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.267072  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0203 11:52:24.267076  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0203 11:52:24.267102  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.267178  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0203 11:52:24.267615  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267668  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267618  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267685  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.267825  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.268155  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.268178  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.268196  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.268242  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.268528  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.268584  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.269015  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.269099  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.269135  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.269171  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.269197  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.269701  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.270284  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.270327  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.273950  175844 addons.go:238] Setting addon default-storageclass=true in "newest-cni-586043"
	W0203 11:52:24.273978  175844 addons.go:247] addon default-storageclass should already be in state true
	I0203 11:52:24.274042  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.274412  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.274462  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.289025  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0203 11:52:24.289035  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0203 11:52:24.289630  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.289674  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.290176  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.290206  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.290318  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.290332  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.290650  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.290878  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.290901  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.291096  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.293650  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.293656  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.295600  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0203 11:52:24.296118  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.296790  175844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0203 11:52:24.297034  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.297192  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.297586  175844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0203 11:52:24.297621  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.298235  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.298334  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0203 11:52:24.298350  175844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0203 11:52:24.298376  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.298562  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0203 11:52:24.298934  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.299547  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.299566  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.299918  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.300034  175844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0203 11:52:24.300652  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.300697  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.301008  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0203 11:52:24.301026  175844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0203 11:52:24.301046  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.301895  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.303400  175844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:52:24.304035  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304500  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304487  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.304531  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304644  175844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:52:24.304655  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:52:24.304667  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.305128  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.305143  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.305147  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.305387  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.305405  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.305612  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.305657  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.305776  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.305795  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.305907  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.307560  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.307791  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.307818  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.307956  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.308107  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.308228  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.308344  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.341888  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0203 11:52:24.342380  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.342876  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.342908  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.343222  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.343423  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.345056  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.345281  175844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:52:24.345300  175844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:52:24.345321  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.348062  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.348521  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.348557  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.348704  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.348944  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.349105  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.349239  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.423706  175844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:52:24.438635  175844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:52:24.438729  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:24.451466  175844 api_server.go:72] duration metric: took 205.720039ms to wait for apiserver process to appear ...
	I0203 11:52:24.451494  175844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:52:24.451512  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:24.455975  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0203 11:52:24.456944  175844 api_server.go:141] control plane version: v1.32.1
	I0203 11:52:24.456960  175844 api_server.go:131] duration metric: took 5.461365ms to wait for apiserver health ...
	I0203 11:52:24.456967  175844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:52:24.462575  175844 system_pods.go:59] 8 kube-system pods found
	I0203 11:52:24.462602  175844 system_pods.go:61] "coredns-668d6bf9bc-cr5dw" [3d1b7381-6b42-4160-ba9d-6fddc2408174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:52:24.462609  175844 system_pods.go:61] "etcd-newest-cni-586043" [16317397-91b4-459d-a91f-ce10dc19f0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:52:24.462618  175844 system_pods.go:61] "kube-apiserver-newest-cni-586043" [79bd9928-7593-4eda-a9d6-fe3fe263c33a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:52:24.462624  175844 system_pods.go:61] "kube-controller-manager-newest-cni-586043" [8a00cc32-1347-42f0-b92b-ecf548236642] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:52:24.462630  175844 system_pods.go:61] "kube-proxy-c4bgm" [1a4f7c54-c137-401a-b004-2c93f251a646] Running
	I0203 11:52:24.462636  175844 system_pods.go:61] "kube-scheduler-newest-cni-586043" [e796e345-ebf7-4e6f-86d8-357cade7d05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 11:52:24.462640  175844 system_pods.go:61] "metrics-server-f79f97bbb-w4v6r" [5c20a6e1-46c0-43fb-8057-90f4d2fc6d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 11:52:24.462646  175844 system_pods.go:61] "storage-provisioner" [9720ea0d-98d4-4916-8e71-71a4e7a080d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:52:24.462651  175844 system_pods.go:74] duration metric: took 5.679512ms to wait for pod list to return data ...
	I0203 11:52:24.462661  175844 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:52:24.464973  175844 default_sa.go:45] found service account: "default"
	I0203 11:52:24.464991  175844 default_sa.go:55] duration metric: took 2.324944ms for default service account to be created ...
	I0203 11:52:24.465002  175844 kubeadm.go:582] duration metric: took 219.259944ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0203 11:52:24.465020  175844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:52:24.467037  175844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:52:24.467054  175844 node_conditions.go:123] node cpu capacity is 2
	I0203 11:52:24.467064  175844 node_conditions.go:105] duration metric: took 2.039421ms to run NodePressure ...
	I0203 11:52:24.467074  175844 start.go:241] waiting for startup goroutines ...
	I0203 11:52:24.510558  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:52:24.518267  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0203 11:52:24.518302  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0203 11:52:24.539840  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0203 11:52:24.539866  175844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0203 11:52:24.569697  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0203 11:52:24.569727  175844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0203 11:52:24.583824  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:52:24.598897  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:52:24.598921  175844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0203 11:52:24.610164  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0203 11:52:24.610188  175844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0203 11:52:24.677539  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0203 11:52:24.677569  175844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0203 11:52:24.700565  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:52:24.799702  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0203 11:52:24.799733  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0203 11:52:24.916536  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0203 11:52:24.916568  175844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0203 11:52:25.033797  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0203 11:52:25.033826  175844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0203 11:52:25.062256  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.062298  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.062596  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.062614  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.062622  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.062629  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.062867  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.062887  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.091731  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.091759  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.092053  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:25.092073  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.092088  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.130272  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0203 11:52:25.130306  175844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0203 11:52:25.184756  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0203 11:52:25.184789  175844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0203 11:52:25.245270  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 11:52:25.245304  175844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0203 11:52:25.294755  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 11:52:26.053809  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.469944422s)
	I0203 11:52:26.053870  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.053884  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.054221  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.054266  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.054293  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.054313  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.054324  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.054556  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.054575  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.054586  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.087724  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.387111638s)
	I0203 11:52:26.087790  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.087808  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.088123  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.088159  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.088184  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.088200  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.088206  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.088502  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.088531  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.088545  175844 addons.go:479] Verifying addon metrics-server=true in "newest-cni-586043"
	I0203 11:52:26.531533  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.236731103s)
	I0203 11:52:26.531586  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.531597  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.532020  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.532039  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.532055  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.532069  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.532081  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.532328  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.532345  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.533858  175844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-586043 addons enable metrics-server
	
	I0203 11:52:26.535168  175844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0203 11:52:26.536259  175844 addons.go:514] duration metric: took 2.290478763s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0203 11:52:26.536307  175844 start.go:246] waiting for cluster config update ...
	I0203 11:52:26.536322  175844 start.go:255] writing updated cluster config ...
	I0203 11:52:26.536548  175844 ssh_runner.go:195] Run: rm -f paused
	I0203 11:52:26.583516  175844 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:52:26.585070  175844 out.go:177] * Done! kubectl is now configured to use "newest-cni-586043" cluster and "default" namespace by default
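With the newest-cni-586043 start reported as done above, the addons it enabled (default-storageclass, storage-provisioner, metrics-server, dashboard) could be spot-checked with ordinary kubectl calls. The commands below are a minimal, illustrative sketch only: the context name is taken from the log, and the kubernetes-dashboard namespace is the one the dashboard addon conventionally uses, not something this log confirms.

	# Illustrative verification of the cluster started above (not part of the test run)
	kubectl --context newest-cni-586043 get pods -n kube-system
	kubectl --context newest-cni-586043 get pods -n kubernetes-dashboard
	kubectl --context newest-cni-586043 top nodes   # only meaningful once the metrics-server addon shown above is serving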
	I0203 11:52:35.872135  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:52:35.872966  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:35.873172  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:40.873720  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:40.873968  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:50.874520  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:50.874761  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:10.875767  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:10.876032  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878348  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:50.878572  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878585  173069 kubeadm.go:310] 
	I0203 11:53:50.878677  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:53:50.878746  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:53:50.878756  173069 kubeadm.go:310] 
	I0203 11:53:50.878805  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:53:50.878848  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:53:50.878993  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:53:50.879004  173069 kubeadm.go:310] 
	I0203 11:53:50.879145  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:53:50.879192  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:53:50.879235  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:53:50.879245  173069 kubeadm.go:310] 
	I0203 11:53:50.879390  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:53:50.879507  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:53:50.879517  173069 kubeadm.go:310] 
	I0203 11:53:50.879660  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:53:50.879782  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:53:50.879904  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:53:50.880019  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:53:50.880033  173069 kubeadm.go:310] 
	I0203 11:53:50.880322  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:53:50.880397  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:53:50.880465  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0203 11:53:50.880620  173069 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
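The troubleshooting commands kubeadm prints in the failure above can be run against the minikube node over SSH rather than from inside it. The sketch below is illustrative only; it assumes the old-k8s-version-517711 profile that appears later in this log and reuses the crictl endpoint quoted by kubeadm.

	# Illustrative kubelet/runtime inspection on the failing node (not part of the test run)
	minikube ssh -p old-k8s-version-517711 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-517711 -- sudo journalctl -xeu kubelet --no-pager
	minikube ssh -p old-k8s-version-517711 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a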
	
	I0203 11:53:50.880666  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:53:56.208593  173069 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.327900088s)
	I0203 11:53:56.208687  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:53:56.222067  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:53:56.231274  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:53:56.231296  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:53:56.231344  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:53:56.240522  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:53:56.240587  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:53:56.249755  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:53:56.258586  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:53:56.258645  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:53:56.267974  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.276669  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:53:56.276720  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.285661  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:53:56.294673  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:53:56.294734  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:53:56.303819  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:53:56.510714  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:55:52.911681  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:55:52.911777  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:55:52.913157  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:55:52.913224  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:55:52.913299  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:55:52.913463  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:55:52.913598  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:55:52.913672  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:55:52.915764  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:55:52.915857  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:55:52.915908  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:55:52.915975  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:55:52.916023  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:55:52.916077  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:55:52.916150  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:55:52.916233  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:55:52.916309  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:55:52.916424  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:55:52.916508  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:55:52.916542  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:55:52.916589  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:55:52.916635  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:55:52.916682  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:55:52.916747  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:55:52.916798  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:55:52.916898  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:55:52.916991  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:55:52.917027  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:55:52.917082  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:55:52.918947  173069 out.go:235]   - Booting up control plane ...
	I0203 11:55:52.919052  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:55:52.919135  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:55:52.919213  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:55:52.919298  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:55:52.919440  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:55:52.919509  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:55:52.919578  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919738  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.919799  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919950  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920007  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920158  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920230  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920452  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920558  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920806  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920815  173069 kubeadm.go:310] 
	I0203 11:55:52.920849  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:55:52.920884  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:55:52.920891  173069 kubeadm.go:310] 
	I0203 11:55:52.920924  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:55:52.920954  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:55:52.921051  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:55:52.921066  173069 kubeadm.go:310] 
	I0203 11:55:52.921160  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:55:52.921199  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:55:52.921228  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:55:52.921235  173069 kubeadm.go:310] 
	I0203 11:55:52.921355  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:55:52.921465  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:55:52.921476  173069 kubeadm.go:310] 
	I0203 11:55:52.921595  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:55:52.921666  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:55:52.921725  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:55:52.921781  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:55:52.921820  173069 kubeadm.go:310] 
	I0203 11:55:52.921866  173069 kubeadm.go:394] duration metric: took 8m3.159723737s to StartCluster
	I0203 11:55:52.921917  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:55:52.921979  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:55:52.965327  173069 cri.go:89] found id: ""
	I0203 11:55:52.965360  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.965370  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:55:52.965377  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:55:52.965429  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:55:52.999197  173069 cri.go:89] found id: ""
	I0203 11:55:52.999224  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.999233  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:55:52.999239  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:55:52.999290  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:55:53.033201  173069 cri.go:89] found id: ""
	I0203 11:55:53.033231  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.033239  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:55:53.033245  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:55:53.033298  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:55:53.069227  173069 cri.go:89] found id: ""
	I0203 11:55:53.069262  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.069274  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:55:53.069282  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:55:53.069361  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:55:53.102418  173069 cri.go:89] found id: ""
	I0203 11:55:53.102448  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.102460  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:55:53.102467  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:55:53.102595  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:55:53.134815  173069 cri.go:89] found id: ""
	I0203 11:55:53.134846  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.134859  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:55:53.134865  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:55:53.134916  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:55:53.184017  173069 cri.go:89] found id: ""
	I0203 11:55:53.184063  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.184075  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:55:53.184086  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:55:53.184180  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:55:53.218584  173069 cri.go:89] found id: ""
	I0203 11:55:53.218620  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.218630  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:55:53.218642  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:55:53.218656  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:55:53.267577  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:55:53.267624  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:55:53.280882  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:55:53.280915  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:55:53.352344  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:55:53.352371  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:55:53.352385  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:55:53.451451  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:55:53.451495  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0203 11:55:53.488076  173069 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 11:55:53.488133  173069 out.go:270] * 
	W0203 11:55:53.488199  173069 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.488213  173069 out.go:270] * 
	W0203 11:55:53.489069  173069 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 11:55:53.492291  173069 out.go:201] 
	W0203 11:55:53.493552  173069 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.493606  173069 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 11:55:53.493647  173069 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 11:55:53.494859  173069 out.go:201] 
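The suggestion logged above amounts to restarting the same profile with the kubelet cgroup driver forced to systemd. A hedged sketch of that retry, reusing only names that appear in this log (profile old-k8s-version-517711, Kubernetes v1.20.0, and the --extra-config flag from the suggestion); any other start flags the original job used would still need to be supplied:

	# Illustrative retry with the suggested cgroup driver (not part of the test run)
	minikube start -p old-k8s-version-517711 \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd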
	
	
	==> CRI-O <==
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.518498688Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738583754518472620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cdf7ffb-c64f-402f-b0cc-4fd01dce4f5a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.519067703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27c42e02-0437-4381-9b2d-693c280da6da name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.519140244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27c42e02-0437-4381-9b2d-693c280da6da name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.519174859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=27c42e02-0437-4381-9b2d-693c280da6da name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.549240260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9caf944-43a6-4595-9686-e462d0e22885 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.549332695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9caf944-43a6-4595-9686-e462d0e22885 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.550391654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bd75487-4ec5-4797-b8be-afe59f54c0ce name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.550840107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738583754550814732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bd75487-4ec5-4797-b8be-afe59f54c0ce name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.551312397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4bad04-2f4c-4a43-a223-90575169c079 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.551376645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4bad04-2f4c-4a43-a223-90575169c079 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.551415875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cf4bad04-2f4c-4a43-a223-90575169c079 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.583128247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c4ecc1a-4ca8-4719-aa4b-c30d21cd2343 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.583226929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c4ecc1a-4ca8-4719-aa4b-c30d21cd2343 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.584608227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed83231d-d8cf-41e2-9cd5-b37b6576cf54 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.584996783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738583754584972334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed83231d-d8cf-41e2-9cd5-b37b6576cf54 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.585455991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffb57a64-7bf6-4222-8be4-92fc1dffd327 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.585528259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffb57a64-7bf6-4222-8be4-92fc1dffd327 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.585603033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ffb57a64-7bf6-4222-8be4-92fc1dffd327 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.616124627Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1789e699-ec32-4e19-9843-cd42f6fdda15 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.616212494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1789e699-ec32-4e19-9843-cd42f6fdda15 name=/runtime.v1.RuntimeService/Version
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.617443373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=030d6456-dec3-4fa4-a6e5-52ca53eb2936 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.617910214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738583754617887492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=030d6456-dec3-4fa4-a6e5-52ca53eb2936 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.618786418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01688a5b-a999-4a48-b769-e95a5ed26692 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.618868862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01688a5b-a999-4a48-b769-e95a5ed26692 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 11:55:54 old-k8s-version-517711 crio[637]: time="2025-02-03 11:55:54.618909396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=01688a5b-a999-4a48-b769-e95a5ed26692 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb 3 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054598] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038441] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.998629] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.169563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.572597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.331149] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.081564] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074399] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.170591] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.142363] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.233678] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.346278] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064365] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.290562] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[Feb 3 11:48] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 3 11:51] systemd-fstab-generator[5068]: Ignoring "noauto" option for root device
	[Feb 3 11:53] systemd-fstab-generator[5353]: Ignoring "noauto" option for root device
	[  +0.065405] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:55:54 up 8 min,  0 users,  load average: 0.01, 0.10, 0.08
	Linux old-k8s-version-517711 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00002ac40, 0xc0001020c0)
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: goroutine 145 [syscall]:
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: syscall.Syscall6(0xe8, 0xc, 0xc000c59b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000c59b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0009dcde0, 0x0, 0x0, 0x0)
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0007fb630)
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5533]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Feb 03 11:55:53 old-k8s-version-517711 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 11:55:53 old-k8s-version-517711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 11:55:53 old-k8s-version-517711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 03 11:55:53 old-k8s-version-517711 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 11:55:53 old-k8s-version-517711 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5599]: I0203 11:55:53.908942    5599 server.go:416] Version: v1.20.0
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5599]: I0203 11:55:53.909258    5599 server.go:837] Client rotation is on, will bootstrap in background
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5599]: I0203 11:55:53.911628    5599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5599]: W0203 11:55:53.912844    5599 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 03 11:55:53 old-k8s-version-517711 kubelet[5599]: I0203 11:55:53.912910    5599 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (237.854867ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-517711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (514.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
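This wait is roughly equivalent to polling the following kubectl query against the profile's context (an illustrative sketch only; the context name is assumed to match the profile and this is not the test's actual implementation):
    kubectl --context old-k8s-version-517711 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
Each warning below is that query failing because nothing is listening on 192.168.61.203:8443.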
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 11:55:58.251996  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 11:56:33.124090  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/default-k8s-diff-port-138645/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 11:56:41.616462  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 11:57:01.720461  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 27 more times)
E0203 11:58:09.948408  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/no-preload-085638/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 12 more times)
E0203 11:58:22.483681  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 14 more times)
E0203 11:58:37.657499  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/no-preload-085638/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 11 more times)
E0203 11:58:49.263374  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/default-k8s-diff-port-138645/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 10 more times)
E0203 11:59:00.131001  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 7 more times)
E0203 11:59:08.318303  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 7 more times)
E0203 11:59:16.965528  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/default-k8s-diff-port-138645/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 28 more times)
E0203 11:59:45.547196  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 3 more times)
E0203 11:59:49.329524  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 18 more times)
E0203 12:00:09.072103  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(last message repeated 22 more times)
E0203 12:00:31.384195  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 9 more times]
E0203 12:00:42.118117  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 16 more times]
E0203 12:00:58.251220  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 13 more times]
E0203 12:01:12.394357  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 19 more times]
E0203 12:01:32.137174  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 8 more times]
E0203 12:01:41.616557  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 19 more times]
E0203 12:02:01.720294  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 19 more times]
E0203 12:02:21.316554  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[previous warning repeated 42 more times]
E0203 12:03:04.681436  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 4 more times]
E0203 12:03:09.948476  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/no-preload-085638/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 12 more times]
E0203 12:03:22.483631  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 12:03:24.784840  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 20 more times]
E0203 12:03:45.202243  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 3 more times]
E0203 12:03:49.263510  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/default-k8s-diff-port-138645/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 10 more times]
E0203 12:04:00.130868  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 7 more times]
E0203 12:04:08.317347  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 40 more times]
E0203 12:04:49.329567  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
[last warning repeated 4 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (263.47004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-517711" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
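For reference, the poll that kept failing above can be reproduced by hand against the same cluster. This is a minimal sketch, assuming the kubectl context carries the profile name (minikube's default) and reusing the namespace and label selector from the warnings; while the apiserver on 192.168.61.203:8443 is down, both commands fail with the same connection-refused error:
	# manual equivalent of the pod list the test keeps retrying
	kubectl --context old-k8s-version-517711 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# or wait for readiness with the same 9m0s budget the test uses
	kubectl --context old-k8s-version-517711 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m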
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (248.076505ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
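The two probes above disagree: the Host field reports Running while the earlier APIServer probe reported Stopped. As a quick manual cross-check, the plain status call (no --format template) should print every component's state for the profile at once, and it exits non-zero when a component is stopped, matching the exit codes above:
	out/minikube-linux-amd64 -p old-k8s-version-517711 status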
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-517711 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-691067 image list                          | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| delete  | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| start   | -p newest-cni-586043 --memory=2200 --alsologtostderr   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-085638 image list                           | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| delete  | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| image   | default-k8s-diff-port-138645                           | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-586043             | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-586043                  | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-586043 --memory=2200 --alsologtostderr   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-586043 image list                           | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	| delete  | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
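	Reassembled from the Args column, the two wrapped "start -p newest-cni-586043" rows above each correspond to a single command line (binary path as used in the dbg lines earlier in this report):
	out/minikube-linux-amd64 start -p newest-cni-586043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1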
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:51:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:51:49.897155  175844 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:51:49.897275  175844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:51:49.897287  175844 out.go:358] Setting ErrFile to fd 2...
	I0203 11:51:49.897291  175844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:51:49.897486  175844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:51:49.898057  175844 out.go:352] Setting JSON to false
	I0203 11:51:49.898943  175844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9252,"bootTime":1738574258,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:51:49.899051  175844 start.go:139] virtualization: kvm guest
	I0203 11:51:49.901414  175844 out.go:177] * [newest-cni-586043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:51:49.903016  175844 notify.go:220] Checking for updates...
	I0203 11:51:49.903024  175844 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:51:49.904418  175844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:51:49.905475  175844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:51:49.906695  175844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:51:49.907794  175844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:51:49.909017  175844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:51:49.910440  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:51:49.910830  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:49.910906  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:49.925489  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0203 11:51:49.925936  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:49.926599  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:49.926617  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:49.926982  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:49.927181  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:49.927443  175844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:51:49.927733  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:49.927780  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:49.942754  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0203 11:51:49.943278  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:49.943789  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:49.943810  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:49.944116  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:49.944333  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:49.982359  175844 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 11:51:49.983564  175844 start.go:297] selected driver: kvm2
	I0203 11:51:49.983579  175844 start.go:901] validating driver "kvm2" against &{Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:51:49.983680  175844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:51:49.984357  175844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:51:49.984460  175844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:51:49.999536  175844 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:51:49.999973  175844 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0203 11:51:50.000007  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:51:50.000057  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:51:50.000113  175844 start.go:340] cluster config:
	{Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:51:50.000234  175844 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:51:50.001824  175844 out.go:177] * Starting "newest-cni-586043" primary control-plane node in "newest-cni-586043" cluster
	I0203 11:51:50.003075  175844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:51:50.003128  175844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 11:51:50.003141  175844 cache.go:56] Caching tarball of preloaded images
	I0203 11:51:50.003229  175844 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:51:50.003240  175844 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:51:50.003363  175844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/config.json ...
	I0203 11:51:50.003582  175844 start.go:360] acquireMachinesLock for newest-cni-586043: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:51:50.003637  175844 start.go:364] duration metric: took 33.224µs to acquireMachinesLock for "newest-cni-586043"
	I0203 11:51:50.003664  175844 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:51:50.003675  175844 fix.go:54] fixHost starting: 
	I0203 11:51:50.003993  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:50.004037  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:50.018719  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0203 11:51:50.020226  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:50.020848  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:50.020873  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:50.021243  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:50.021461  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:50.021601  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:51:50.023310  175844 fix.go:112] recreateIfNeeded on newest-cni-586043: state=Stopped err=<nil>
	I0203 11:51:50.023355  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	W0203 11:51:50.023508  175844 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:51:50.025216  175844 out.go:177] * Restarting existing kvm2 VM for "newest-cni-586043" ...
	I0203 11:51:47.958652  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:47.972404  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:47.972476  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:48.003924  173069 cri.go:89] found id: ""
	I0203 11:51:48.003952  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.003963  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:48.003972  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:48.004036  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:48.035462  173069 cri.go:89] found id: ""
	I0203 11:51:48.035495  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.035507  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:48.035516  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:48.035571  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:48.066226  173069 cri.go:89] found id: ""
	I0203 11:51:48.066255  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.066266  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:48.066274  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:48.066340  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:48.097119  173069 cri.go:89] found id: ""
	I0203 11:51:48.097150  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.097162  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:48.097170  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:48.097234  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:48.129010  173069 cri.go:89] found id: ""
	I0203 11:51:48.129049  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.129061  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:48.129069  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:48.129128  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:48.172322  173069 cri.go:89] found id: ""
	I0203 11:51:48.172355  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.172363  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:48.172371  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:48.172442  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:48.203549  173069 cri.go:89] found id: ""
	I0203 11:51:48.203579  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.203587  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:48.203594  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:48.203645  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:48.234281  173069 cri.go:89] found id: ""
	I0203 11:51:48.234306  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.234317  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:48.234330  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:48.234347  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:48.246492  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:48.246517  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:48.310115  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:48.310151  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:48.310168  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:48.386999  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:48.387026  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:48.423031  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:48.423061  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
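(The probe loop above is driven entirely by `sudo crictl ps -a --quiet --name=<component>` run over SSH: empty output is treated as "no container found", which is why every component falls through to the journalctl/dmesg log gathering. A minimal stand-alone Go sketch of that check, run locally rather than through ssh_runner and not minikube's actual implementation:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the probe in the log: list all containers whose name
// matches and return their IDs. An empty result means "not found".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(name)
		switch {
		case err != nil:
			fmt.Printf("probe for %q failed: %v\n", name, err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", name)
		default:
			fmt.Printf("%q container IDs: %v\n", name, ids)
		}
	}
}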
	I0203 11:51:50.971751  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:50.984547  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:50.984616  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:51.021321  173069 cri.go:89] found id: ""
	I0203 11:51:51.021357  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.021367  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:51.021376  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:51.021435  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:51.052316  173069 cri.go:89] found id: ""
	I0203 11:51:51.052346  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.052365  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:51.052374  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:51.052439  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:51.095230  173069 cri.go:89] found id: ""
	I0203 11:51:51.095260  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.095273  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:51.095281  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:51.095344  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:50.026238  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Start
	I0203 11:51:50.026392  175844 main.go:141] libmachine: (newest-cni-586043) starting domain...
	I0203 11:51:50.026416  175844 main.go:141] libmachine: (newest-cni-586043) ensuring networks are active...
	I0203 11:51:50.027168  175844 main.go:141] libmachine: (newest-cni-586043) Ensuring network default is active
	I0203 11:51:50.027412  175844 main.go:141] libmachine: (newest-cni-586043) Ensuring network mk-newest-cni-586043 is active
	I0203 11:51:50.027811  175844 main.go:141] libmachine: (newest-cni-586043) getting domain XML...
	I0203 11:51:50.028591  175844 main.go:141] libmachine: (newest-cni-586043) creating domain...
	I0203 11:51:51.307305  175844 main.go:141] libmachine: (newest-cni-586043) waiting for IP...
	I0203 11:51:51.308386  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.308948  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.309071  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.308963  175880 retry.go:31] will retry after 231.852312ms: waiting for domain to come up
	I0203 11:51:51.542677  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.543280  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.543310  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.543240  175880 retry.go:31] will retry after 253.507055ms: waiting for domain to come up
	I0203 11:51:51.798941  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.799486  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.799509  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.799452  175880 retry.go:31] will retry after 481.304674ms: waiting for domain to come up
	I0203 11:51:52.282121  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:52.282587  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:52.282613  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:52.282573  175880 retry.go:31] will retry after 574.20795ms: waiting for domain to come up
	I0203 11:51:52.858249  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:52.858753  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:52.858797  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:52.858730  175880 retry.go:31] will retry after 479.45061ms: waiting for domain to come up
	I0203 11:51:53.339378  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:53.339968  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:53.340048  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:53.339937  175880 retry.go:31] will retry after 611.732312ms: waiting for domain to come up
	I0203 11:51:53.953770  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:53.954271  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:53.954309  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:53.954225  175880 retry.go:31] will retry after 1.020753974s: waiting for domain to come up
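(The `retry.go:31] will retry after …: waiting for domain to come up` lines above are a poll-with-growing-delay loop around the libvirt DHCP-lease lookup. A simplified sketch of that pattern, with a hypothetical probe standing in for the IP lookup; the delays and jitter below are illustrative, not minikube's exact schedule:)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe until it succeeds or timeout elapses, sleeping a
// jittered, growing delay between attempts.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		delay *= 2 // back off between attempts
	}
}

func main() {
	start := time.Now()
	// Hypothetical probe: pretend the domain gets an IP ~3s after start.
	err := waitFor(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("wait result:", err)
}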
	I0203 11:51:51.127525  173069 cri.go:89] found id: ""
	I0203 11:51:51.127555  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.127564  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:51.127571  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:51.127642  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:51.174651  173069 cri.go:89] found id: ""
	I0203 11:51:51.174683  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.174694  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:51.174700  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:51.174761  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:51.208470  173069 cri.go:89] found id: ""
	I0203 11:51:51.208498  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.208510  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:51.208518  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:51.208585  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:51.242996  173069 cri.go:89] found id: ""
	I0203 11:51:51.243022  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.243031  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:51.243042  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:51.243103  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:51.277561  173069 cri.go:89] found id: ""
	I0203 11:51:51.277584  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.277592  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:51.277602  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:51.277613  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:51.316285  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:51.316313  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:51.378564  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:51.378598  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:51.391948  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:51.391974  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:51.459101  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:51.459127  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:51.459140  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:54.041961  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:54.057395  173069 kubeadm.go:597] duration metric: took 4m4.242570395s to restartPrimaryControlPlane
	W0203 11:51:54.057514  173069 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0203 11:51:54.057545  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:51:54.515481  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:51:54.529356  173069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:51:54.538455  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:51:54.547140  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:51:54.547165  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:51:54.547215  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:51:54.555393  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:51:54.555454  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:51:54.564221  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:51:54.572805  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:51:54.572854  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:51:54.581348  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.589519  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:51:54.589584  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.598204  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:51:54.606299  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:51:54.606354  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
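(The cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, https://control-plane.minikube.internal:8443, and removes any file that does not reference it; here every grep fails simply because the files no longer exist after `kubeadm reset`. A simplified local sketch of that check, not minikube's actual code, which runs the grep/rm over SSH:)

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig keeps path if it already references the expected
// endpoint; otherwise it removes the file (ignoring "does not exist").
func cleanStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // already points at the control plane; keep it
	}
	err = os.Remove(path)
	switch {
	case err == nil:
		fmt.Println("removed stale config:", path)
	case os.IsNotExist(err):
		// nothing to clean up
	default:
		return err
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f); err != nil {
			fmt.Println("cleanup failed:", err)
		}
	}
}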
	I0203 11:51:54.614879  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:51:54.681507  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:51:54.681579  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:51:54.833975  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:51:54.834115  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:51:54.834236  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:51:55.015734  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:51:55.017800  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:51:55.017908  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:51:55.018029  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:51:55.018147  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:51:55.018236  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:51:55.018336  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:51:55.018420  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:51:55.018509  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:51:55.018605  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:51:55.018770  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:51:55.019144  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:51:55.019209  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:51:55.019307  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:51:55.202633  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:51:55.377699  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:51:55.476193  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:51:55.684690  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:51:55.706297  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:51:55.707243  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:51:55.707310  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:51:55.857226  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:51:55.859128  173069 out.go:235]   - Booting up control plane ...
	I0203 11:51:55.859247  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:51:55.863942  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:51:55.865838  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:51:55.867142  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:51:55.871067  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:51:54.976708  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:54.977205  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:54.977268  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:54.977201  175880 retry.go:31] will retry after 1.395111029s: waiting for domain to come up
	I0203 11:51:56.374208  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:56.374601  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:56.374630  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:56.374585  175880 retry.go:31] will retry after 1.224641048s: waiting for domain to come up
	I0203 11:51:57.600995  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:57.601460  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:57.601486  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:57.601423  175880 retry.go:31] will retry after 2.153368032s: waiting for domain to come up
	I0203 11:51:59.757799  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:59.758428  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:59.758462  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:59.758378  175880 retry.go:31] will retry after 1.84005517s: waiting for domain to come up
	I0203 11:52:01.600091  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:01.600507  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:52:01.600557  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:52:01.600500  175880 retry.go:31] will retry after 3.236577417s: waiting for domain to come up
	I0203 11:52:04.840924  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:04.841396  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:52:04.841418  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:52:04.841372  175880 retry.go:31] will retry after 4.182823067s: waiting for domain to come up
	I0203 11:52:09.028277  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.028811  175844 main.go:141] libmachine: (newest-cni-586043) found domain IP: 192.168.72.151
	I0203 11:52:09.028840  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has current primary IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.028850  175844 main.go:141] libmachine: (newest-cni-586043) reserving static IP address...
	I0203 11:52:09.029304  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "newest-cni-586043", mac: "52:54:00:47:62:16", ip: "192.168.72.151"} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.029356  175844 main.go:141] libmachine: (newest-cni-586043) DBG | skip adding static IP to network mk-newest-cni-586043 - found existing host DHCP lease matching {name: "newest-cni-586043", mac: "52:54:00:47:62:16", ip: "192.168.72.151"}
	I0203 11:52:09.029375  175844 main.go:141] libmachine: (newest-cni-586043) reserved static IP address 192.168.72.151 for domain newest-cni-586043
	I0203 11:52:09.029391  175844 main.go:141] libmachine: (newest-cni-586043) waiting for SSH...
	I0203 11:52:09.029402  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Getting to WaitForSSH function...
	I0203 11:52:09.031306  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.031561  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.031584  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.031691  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Using SSH client type: external
	I0203 11:52:09.031718  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa (-rw-------)
	I0203 11:52:09.031754  175844 main.go:141] libmachine: (newest-cni-586043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:52:09.031768  175844 main.go:141] libmachine: (newest-cni-586043) DBG | About to run SSH command:
	I0203 11:52:09.031783  175844 main.go:141] libmachine: (newest-cni-586043) DBG | exit 0
	I0203 11:52:09.158021  175844 main.go:141] libmachine: (newest-cni-586043) DBG | SSH cmd err, output: <nil>: 
	I0203 11:52:09.158333  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetConfigRaw
	I0203 11:52:09.158996  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:09.161428  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.161811  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.161843  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.162127  175844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/config.json ...
	I0203 11:52:09.162368  175844 machine.go:93] provisionDockerMachine start ...
	I0203 11:52:09.162395  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:09.162624  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.164802  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.165087  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.165126  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.165207  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.165381  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.165547  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.165670  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.165859  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.166136  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.166151  175844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:52:09.274234  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:52:09.274266  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.274537  175844 buildroot.go:166] provisioning hostname "newest-cni-586043"
	I0203 11:52:09.274559  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.274783  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.277599  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.277966  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.278013  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.278316  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.278559  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.278755  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.278915  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.279070  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.279267  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.279283  175844 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-586043 && echo "newest-cni-586043" | sudo tee /etc/hostname
	I0203 11:52:09.400130  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-586043
	
	I0203 11:52:09.400158  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.402972  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.403283  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.403317  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.403501  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.403705  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.403913  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.404066  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.404242  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.404412  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.404436  175844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-586043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-586043/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-586043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:52:09.517890  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:52:09.517929  175844 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:52:09.517949  175844 buildroot.go:174] setting up certificates
	I0203 11:52:09.517959  175844 provision.go:84] configureAuth start
	I0203 11:52:09.517969  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.518273  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:09.520729  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.521035  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.521065  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.521252  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.523526  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.523855  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.523884  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.524049  175844 provision.go:143] copyHostCerts
	I0203 11:52:09.524110  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:52:09.524130  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:52:09.524200  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:52:09.524288  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:52:09.524296  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:52:09.524320  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:52:09.524376  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:52:09.524383  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:52:09.524402  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:52:09.524452  175844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.newest-cni-586043 san=[127.0.0.1 192.168.72.151 localhost minikube newest-cni-586043]
	I0203 11:52:09.790829  175844 provision.go:177] copyRemoteCerts
	I0203 11:52:09.790896  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:52:09.790920  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.793962  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.794408  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.794440  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.794595  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.794829  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.794997  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.795367  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:09.881518  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:52:09.906901  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0203 11:52:09.931388  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:52:09.953430  175844 provision.go:87] duration metric: took 435.447216ms to configureAuth
	I0203 11:52:09.953471  175844 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:52:09.953676  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:52:09.953755  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.956581  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.956917  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.956942  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.957055  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.957227  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.957362  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.957584  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.957788  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.958041  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.958063  175844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:52:10.176560  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:52:10.176589  175844 machine.go:96] duration metric: took 1.014204647s to provisionDockerMachine
	I0203 11:52:10.176602  175844 start.go:293] postStartSetup for "newest-cni-586043" (driver="kvm2")
	I0203 11:52:10.176613  175844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:52:10.176631  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.176961  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:52:10.176996  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.179737  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.180134  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.180164  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.180316  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.180547  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.180744  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.180895  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.266380  175844 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:52:10.270497  175844 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:52:10.270522  175844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:52:10.270598  175844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:52:10.270682  175844 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:52:10.270792  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:52:10.281329  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:52:10.305215  175844 start.go:296] duration metric: took 128.597013ms for postStartSetup
	I0203 11:52:10.305259  175844 fix.go:56] duration metric: took 20.301585236s for fixHost
	I0203 11:52:10.305281  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.308015  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.308340  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.308363  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.308576  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.308776  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.308933  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.309106  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.309269  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:10.309477  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:10.309488  175844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:52:10.418635  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738583530.394973154
	
	I0203 11:52:10.418670  175844 fix.go:216] guest clock: 1738583530.394973154
	I0203 11:52:10.418681  175844 fix.go:229] Guest: 2025-02-03 11:52:10.394973154 +0000 UTC Remote: 2025-02-03 11:52:10.305263637 +0000 UTC m=+20.446505021 (delta=89.709517ms)
	I0203 11:52:10.418749  175844 fix.go:200] guest clock delta is within tolerance: 89.709517ms
	I0203 11:52:10.418762  175844 start.go:83] releasing machines lock for "newest-cni-586043", held for 20.41511092s
	I0203 11:52:10.418798  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.419078  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:10.421707  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.422072  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.422103  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.422248  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.422797  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.422964  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.423053  175844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:52:10.423102  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.423134  175844 ssh_runner.go:195] Run: cat /version.json
	I0203 11:52:10.423157  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.425822  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.425947  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426182  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.426204  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426244  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.426265  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426381  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.426506  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.426588  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.426696  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.426767  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.426837  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.426898  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.426931  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.534682  175844 ssh_runner.go:195] Run: systemctl --version
	I0203 11:52:10.540384  175844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:52:10.689697  175844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:52:10.695210  175844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:52:10.695274  175844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:52:10.710890  175844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:52:10.710920  175844 start.go:495] detecting cgroup driver to use...
	I0203 11:52:10.710996  175844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:52:10.726494  175844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:52:10.739926  175844 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:52:10.739983  175844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:52:10.753560  175844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:52:10.767625  175844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:52:10.883158  175844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:52:11.040505  175844 docker.go:233] disabling docker service ...
	I0203 11:52:11.040580  175844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:52:11.054421  175844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:52:11.067456  175844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:52:11.197256  175844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:52:11.326650  175844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:52:11.347953  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:52:11.365712  175844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 11:52:11.365783  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.375704  175844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:52:11.375785  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.385498  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.395211  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.404733  175844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:52:11.414432  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.424057  175844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.439837  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.449629  175844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:52:11.458405  175844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:52:11.458478  175844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:52:11.470212  175844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:52:11.480208  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:11.603955  175844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:52:11.686456  175844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:52:11.686529  175844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:52:11.690882  175844 start.go:563] Will wait 60s for crictl version
	I0203 11:52:11.690934  175844 ssh_runner.go:195] Run: which crictl
	I0203 11:52:11.694501  175844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:52:11.731809  175844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:52:11.731905  175844 ssh_runner.go:195] Run: crio --version
	I0203 11:52:11.761777  175844 ssh_runner.go:195] Run: crio --version
	I0203 11:52:11.793653  175844 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0203 11:52:11.795062  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:11.797753  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:11.798098  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:11.798125  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:11.798347  175844 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0203 11:52:11.802272  175844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:52:11.815917  175844 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0203 11:52:11.817298  175844 kubeadm.go:883] updating cluster {Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:52:11.817452  175844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:52:11.817531  175844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:52:11.850957  175844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0203 11:52:11.851042  175844 ssh_runner.go:195] Run: which lz4
	I0203 11:52:11.854770  175844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:52:11.858671  175844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:52:11.858703  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0203 11:52:13.043197  175844 crio.go:462] duration metric: took 1.188462639s to copy over tarball
	I0203 11:52:13.043293  175844 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:52:15.160894  175844 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117557833s)
	I0203 11:52:15.160939  175844 crio.go:469] duration metric: took 2.117706974s to extract the tarball
	I0203 11:52:15.160949  175844 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:52:15.198286  175844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:52:15.239287  175844 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:52:15.239321  175844 cache_images.go:84] Images are preloaded, skipping loading
	I0203 11:52:15.239330  175844 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.32.1 crio true true} ...
	I0203 11:52:15.239461  175844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-586043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:52:15.239619  175844 ssh_runner.go:195] Run: crio config
	I0203 11:52:15.287775  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:52:15.287800  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:52:15.287810  175844 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0203 11:52:15.287833  175844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-586043 NodeName:newest-cni-586043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:52:15.287959  175844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-586043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:52:15.288022  175844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:52:15.297463  175844 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:52:15.297537  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:52:15.306437  175844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0203 11:52:15.321420  175844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:52:15.336615  175844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0203 11:52:15.352231  175844 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0203 11:52:15.355798  175844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:52:15.367061  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:15.495735  175844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:52:15.512622  175844 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043 for IP: 192.168.72.151
	I0203 11:52:15.512651  175844 certs.go:194] generating shared ca certs ...
	I0203 11:52:15.512674  175844 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:15.512839  175844 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:52:15.512893  175844 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:52:15.512907  175844 certs.go:256] generating profile certs ...
	I0203 11:52:15.513010  175844 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/client.key
	I0203 11:52:15.513093  175844 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.key.63294795
	I0203 11:52:15.513150  175844 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.key
	I0203 11:52:15.513307  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:52:15.513348  175844 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:52:15.513370  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:52:15.513458  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:52:15.513498  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:52:15.513536  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:52:15.513590  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:52:15.514532  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:52:15.549975  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:52:15.586087  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:52:15.616774  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:52:15.650861  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 11:52:15.677800  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:52:15.702344  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:52:15.724326  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:52:15.746037  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:52:15.768136  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:52:15.790221  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:52:15.812120  175844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:52:15.828614  175844 ssh_runner.go:195] Run: openssl version
	I0203 11:52:15.834594  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:52:15.845364  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.849706  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.849770  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.855545  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:52:15.866161  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:52:15.876957  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.881522  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.881602  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.887046  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:52:15.897606  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:52:15.908452  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.912883  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.912951  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.918459  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:52:15.928802  175844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:52:15.933142  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:52:15.938806  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:52:15.944291  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:52:15.949834  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:52:15.955213  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:52:15.960551  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 11:52:15.965905  175844 kubeadm.go:392] StartCluster: {Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mult
iNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:52:15.965992  175844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:52:15.966055  175844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:52:16.005648  175844 cri.go:89] found id: ""
	I0203 11:52:16.005716  175844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:52:16.015599  175844 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 11:52:16.015623  175844 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 11:52:16.015672  175844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 11:52:16.024927  175844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:52:16.025481  175844 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-586043" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:52:16.025667  175844 kubeconfig.go:62] /home/jenkins/minikube-integration/20354-109432/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-586043" cluster setting kubeconfig missing "newest-cni-586043" context setting]
	I0203 11:52:16.025988  175844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:16.028966  175844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 11:52:16.038295  175844 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0203 11:52:16.038346  175844 kubeadm.go:1160] stopping kube-system containers ...
	I0203 11:52:16.038363  175844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0203 11:52:16.038415  175844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:52:16.076934  175844 cri.go:89] found id: ""
	I0203 11:52:16.077021  175844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 11:52:16.093360  175844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:52:16.102923  175844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:52:16.102952  175844 kubeadm.go:157] found existing configuration files:
	
	I0203 11:52:16.103002  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:52:16.111845  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:52:16.111910  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:52:16.121141  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:52:16.129822  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:52:16.129886  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:52:16.138692  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:52:16.147297  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:52:16.147368  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:52:16.157136  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:52:16.166841  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:52:16.166927  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:52:16.176387  175844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:52:16.185620  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:16.308286  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.428161  175844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.119820981s)
	I0203 11:52:17.428197  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.617442  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.710553  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.786236  175844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:52:17.786327  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:18.287335  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:18.787276  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.287247  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.787249  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.832243  175844 api_server.go:72] duration metric: took 2.046005993s to wait for apiserver process to appear ...
	I0203 11:52:19.832296  175844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:52:19.832324  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:19.832848  175844 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": dial tcp 192.168.72.151:8443: connect: connection refused
	I0203 11:52:20.333113  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.593112  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:52:22.593149  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:52:22.593168  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.615767  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:52:22.615799  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:52:22.833274  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.838649  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:52:22.838680  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:52:23.333376  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:23.338020  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:52:23.338047  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:52:23.832467  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:23.836670  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0203 11:52:23.842741  175844 api_server.go:141] control plane version: v1.32.1
	I0203 11:52:23.842765  175844 api_server.go:131] duration metric: took 4.010461718s to wait for apiserver health ...
	I0203 11:52:23.842774  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:52:23.842781  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:52:23.844446  175844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 11:52:23.845620  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 11:52:23.878399  175844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0203 11:52:23.908467  175844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:52:23.916662  175844 system_pods.go:59] 8 kube-system pods found
	I0203 11:52:23.916703  175844 system_pods.go:61] "coredns-668d6bf9bc-cr5dw" [3d1b7381-6b42-4160-ba9d-6fddc2408174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:52:23.916712  175844 system_pods.go:61] "etcd-newest-cni-586043" [16317397-91b4-459d-a91f-ce10dc19f0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:52:23.916721  175844 system_pods.go:61] "kube-apiserver-newest-cni-586043" [79bd9928-7593-4eda-a9d6-fe3fe263c33a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:52:23.916727  175844 system_pods.go:61] "kube-controller-manager-newest-cni-586043" [8a00cc32-1347-42f0-b92b-ecf548236642] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:52:23.916733  175844 system_pods.go:61] "kube-proxy-c4bgm" [1a4f7c54-c137-401a-b004-2c93f251a646] Running
	I0203 11:52:23.916738  175844 system_pods.go:61] "kube-scheduler-newest-cni-586043" [e796e345-ebf7-4e6f-86d8-357cade7d05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 11:52:23.916743  175844 system_pods.go:61] "metrics-server-f79f97bbb-w4v6r" [5c20a6e1-46c0-43fb-8057-90f4d2fc6d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 11:52:23.916750  175844 system_pods.go:61] "storage-provisioner" [9720ea0d-98d4-4916-8e71-71a4e7a080d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:52:23.916765  175844 system_pods.go:74] duration metric: took 8.272337ms to wait for pod list to return data ...
	I0203 11:52:23.916777  175844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:52:23.920379  175844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:52:23.920405  175844 node_conditions.go:123] node cpu capacity is 2
	I0203 11:52:23.920416  175844 node_conditions.go:105] duration metric: took 3.634031ms to run NodePressure ...
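	# The NodePressure step above reads node conditions and capacity; the same data can be
	# inspected by hand (assumes kubectl access to the profile's cluster):
	kubectl describe node newest-cni-586043 | grep -A5 -E 'Conditions:|Capacity:'
	kubectl get node newest-cni-586043 -o jsonpath='{.status.capacity}'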
	I0203 11:52:23.920432  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:24.231056  175844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:52:24.244201  175844 ops.go:34] apiserver oom_adj: -16
	I0203 11:52:24.244230  175844 kubeadm.go:597] duration metric: took 8.228599887s to restartPrimaryControlPlane
	I0203 11:52:24.244242  175844 kubeadm.go:394] duration metric: took 8.278345475s to StartCluster
	I0203 11:52:24.244264  175844 settings.go:142] acquiring lock: {Name:mk7f08542cc4ae303b222901a9d369cc0753d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:24.244357  175844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:52:24.245400  175844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:24.245703  175844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:52:24.245788  175844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
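	# The toEnable map above records which addons are toggled for this profile; the same
	# per-profile addon states can be listed from the CLI:
	minikube -p newest-cni-586043 addons list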
	I0203 11:52:24.245905  175844 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-586043"
	I0203 11:52:24.245926  175844 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-586043"
	W0203 11:52:24.245937  175844 addons.go:247] addon storage-provisioner should already be in state true
	I0203 11:52:24.245931  175844 addons.go:69] Setting default-storageclass=true in profile "newest-cni-586043"
	I0203 11:52:24.245943  175844 addons.go:69] Setting metrics-server=true in profile "newest-cni-586043"
	I0203 11:52:24.245967  175844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-586043"
	I0203 11:52:24.245974  175844 addons.go:238] Setting addon metrics-server=true in "newest-cni-586043"
	I0203 11:52:24.245979  175844 addons.go:69] Setting dashboard=true in profile "newest-cni-586043"
	I0203 11:52:24.246021  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:52:24.246026  175844 addons.go:238] Setting addon dashboard=true in "newest-cni-586043"
	W0203 11:52:24.246043  175844 addons.go:247] addon dashboard should already be in state true
	W0203 11:52:24.246056  175844 addons.go:247] addon metrics-server should already be in state true
	I0203 11:52:24.246096  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.246136  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.245971  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.246487  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246541  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246546  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246575  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246627  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246637  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246653  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246581  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.249614  175844 out.go:177] * Verifying Kubernetes components...
	I0203 11:52:24.250962  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:24.264646  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0203 11:52:24.265348  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.266025  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.266044  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.267072  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0203 11:52:24.267076  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0203 11:52:24.267102  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.267178  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0203 11:52:24.267615  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267668  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267618  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267685  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.267825  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.268155  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.268178  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.268196  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.268242  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.268528  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.268584  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.269015  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.269099  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.269135  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.269171  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.269197  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.269701  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.270284  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.270327  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.273950  175844 addons.go:238] Setting addon default-storageclass=true in "newest-cni-586043"
	W0203 11:52:24.273978  175844 addons.go:247] addon default-storageclass should already be in state true
	I0203 11:52:24.274042  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.274412  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.274462  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.289025  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0203 11:52:24.289035  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0203 11:52:24.289630  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.289674  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.290176  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.290206  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.290318  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.290332  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.290650  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.290878  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.290901  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.291096  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.293650  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.293656  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.295600  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0203 11:52:24.296118  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.296790  175844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0203 11:52:24.297034  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.297192  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.297586  175844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0203 11:52:24.297621  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.298235  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.298334  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0203 11:52:24.298350  175844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0203 11:52:24.298376  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.298562  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0203 11:52:24.298934  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.299547  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.299566  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.299918  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.300034  175844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0203 11:52:24.300652  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.300697  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.301008  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0203 11:52:24.301026  175844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0203 11:52:24.301046  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.301895  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.303400  175844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:52:24.304035  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304500  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304487  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.304531  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304644  175844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:52:24.304655  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:52:24.304667  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.305128  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.305143  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.305147  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.305387  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.305405  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.305612  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.305657  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.305776  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.305795  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.305907  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.307560  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.307791  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.307818  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.307956  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.308107  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.308228  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.308344  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
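	# The ssh clients above are built from the machine's id_rsa key and the "docker" user; a
	# sketch of reaching the same node by hand, either through minikube or with that key:
	minikube -p newest-cni-586043 ssh
	ssh -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa docker@192.168.72.151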
	I0203 11:52:24.341888  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0203 11:52:24.342380  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.342876  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.342908  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.343222  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.343423  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.345056  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.345281  175844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:52:24.345300  175844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:52:24.345321  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.348062  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.348521  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.348557  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.348704  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.348944  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.349105  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.349239  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.423706  175844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:52:24.438635  175844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:52:24.438729  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:24.451466  175844 api_server.go:72] duration metric: took 205.720039ms to wait for apiserver process to appear ...
	I0203 11:52:24.451494  175844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:52:24.451512  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:24.455975  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0203 11:52:24.456944  175844 api_server.go:141] control plane version: v1.32.1
	I0203 11:52:24.456960  175844 api_server.go:131] duration metric: took 5.461365ms to wait for apiserver health ...
	I0203 11:52:24.456967  175844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:52:24.462575  175844 system_pods.go:59] 8 kube-system pods found
	I0203 11:52:24.462602  175844 system_pods.go:61] "coredns-668d6bf9bc-cr5dw" [3d1b7381-6b42-4160-ba9d-6fddc2408174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:52:24.462609  175844 system_pods.go:61] "etcd-newest-cni-586043" [16317397-91b4-459d-a91f-ce10dc19f0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:52:24.462618  175844 system_pods.go:61] "kube-apiserver-newest-cni-586043" [79bd9928-7593-4eda-a9d6-fe3fe263c33a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:52:24.462624  175844 system_pods.go:61] "kube-controller-manager-newest-cni-586043" [8a00cc32-1347-42f0-b92b-ecf548236642] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:52:24.462630  175844 system_pods.go:61] "kube-proxy-c4bgm" [1a4f7c54-c137-401a-b004-2c93f251a646] Running
	I0203 11:52:24.462636  175844 system_pods.go:61] "kube-scheduler-newest-cni-586043" [e796e345-ebf7-4e6f-86d8-357cade7d05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 11:52:24.462640  175844 system_pods.go:61] "metrics-server-f79f97bbb-w4v6r" [5c20a6e1-46c0-43fb-8057-90f4d2fc6d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 11:52:24.462646  175844 system_pods.go:61] "storage-provisioner" [9720ea0d-98d4-4916-8e71-71a4e7a080d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:52:24.462651  175844 system_pods.go:74] duration metric: took 5.679512ms to wait for pod list to return data ...
	I0203 11:52:24.462661  175844 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:52:24.464973  175844 default_sa.go:45] found service account: "default"
	I0203 11:52:24.464991  175844 default_sa.go:55] duration metric: took 2.324944ms for default service account to be created ...
	I0203 11:52:24.465002  175844 kubeadm.go:582] duration metric: took 219.259944ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0203 11:52:24.465020  175844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:52:24.467037  175844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:52:24.467054  175844 node_conditions.go:123] node cpu capacity is 2
	I0203 11:52:24.467064  175844 node_conditions.go:105] duration metric: took 2.039421ms to run NodePressure ...
	I0203 11:52:24.467074  175844 start.go:241] waiting for startup goroutines ...
	I0203 11:52:24.510558  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:52:24.518267  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0203 11:52:24.518302  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0203 11:52:24.539840  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0203 11:52:24.539866  175844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0203 11:52:24.569697  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0203 11:52:24.569727  175844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0203 11:52:24.583824  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:52:24.598897  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:52:24.598921  175844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0203 11:52:24.610164  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0203 11:52:24.610188  175844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0203 11:52:24.677539  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0203 11:52:24.677569  175844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0203 11:52:24.700565  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:52:24.799702  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0203 11:52:24.799733  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0203 11:52:24.916536  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0203 11:52:24.916568  175844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0203 11:52:25.033797  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0203 11:52:25.033826  175844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0203 11:52:25.062256  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.062298  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.062596  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.062614  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.062622  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.062629  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.062867  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.062887  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.091731  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.091759  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.092053  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:25.092073  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.092088  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.130272  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0203 11:52:25.130306  175844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0203 11:52:25.184756  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0203 11:52:25.184789  175844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0203 11:52:25.245270  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 11:52:25.245304  175844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0203 11:52:25.294755  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 11:52:26.053809  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.469944422s)
	I0203 11:52:26.053870  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.053884  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.054221  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.054266  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.054293  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.054313  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.054324  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.054556  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.054575  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.054586  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.087724  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.387111638s)
	I0203 11:52:26.087790  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.087808  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.088123  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.088159  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.088184  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.088200  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.088206  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.088502  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.088531  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.088545  175844 addons.go:479] Verifying addon metrics-server=true in "newest-cni-586043"
	I0203 11:52:26.531533  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.236731103s)
	I0203 11:52:26.531586  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.531597  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.532020  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.532039  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.532055  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.532069  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.532081  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.532328  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.532345  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.533858  175844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-586043 addons enable metrics-server
	
	I0203 11:52:26.535168  175844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0203 11:52:26.536259  175844 addons.go:514] duration metric: took 2.290478763s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
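	# A quick sketch for checking the workloads created by the addons enabled above (the
	# metrics-server deployment lands in kube-system and the dashboard in its own namespace,
	# per the manifests applied earlier):
	kubectl -n kube-system get deploy metrics-server
	kubectl -n kubernetes-dashboard get pods
	kubectl get storageclass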
	I0203 11:52:26.536307  175844 start.go:246] waiting for cluster config update ...
	I0203 11:52:26.536322  175844 start.go:255] writing updated cluster config ...
	I0203 11:52:26.536548  175844 ssh_runner.go:195] Run: rm -f paused
	I0203 11:52:26.583516  175844 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:52:26.585070  175844 out.go:177] * Done! kubectl is now configured to use "newest-cni-586043" cluster and "default" namespace by default
	I0203 11:52:35.872135  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:52:35.872966  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:35.873172  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:40.873720  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:40.873968  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:50.874520  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:50.874761  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:10.875767  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:10.876032  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878348  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:50.878572  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878585  173069 kubeadm.go:310] 
	I0203 11:53:50.878677  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:53:50.878746  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:53:50.878756  173069 kubeadm.go:310] 
	I0203 11:53:50.878805  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:53:50.878848  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:53:50.878993  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:53:50.879004  173069 kubeadm.go:310] 
	I0203 11:53:50.879145  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:53:50.879192  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:53:50.879235  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:53:50.879245  173069 kubeadm.go:310] 
	I0203 11:53:50.879390  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:53:50.879507  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:53:50.879517  173069 kubeadm.go:310] 
	I0203 11:53:50.879660  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:53:50.879782  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:53:50.879904  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:53:50.880019  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:53:50.880033  173069 kubeadm.go:310] 
	I0203 11:53:50.880322  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:53:50.880397  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:53:50.880465  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0203 11:53:50.880620  173069 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
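	# The troubleshooting steps kubeadm suggests above, run on the node in one pass (commands
	# taken from the advice in the log):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause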
	
	I0203 11:53:50.880666  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:53:56.208593  173069 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.327900088s)
	I0203 11:53:56.208687  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:53:56.222067  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:53:56.231274  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:53:56.231296  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:53:56.231344  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:53:56.240522  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:53:56.240587  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:53:56.249755  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:53:56.258586  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:53:56.258645  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:53:56.267974  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.276669  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:53:56.276720  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.285661  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:53:56.294673  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:53:56.294734  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
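	# The four grep / rm -f pairs above follow one pattern and collapse into a single loop
	# over the kubeconfig files kubeadm checks:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done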
	I0203 11:53:56.303819  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:53:56.510714  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:55:52.911681  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:55:52.911777  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:55:52.913157  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:55:52.913224  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:55:52.913299  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:55:52.913463  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:55:52.913598  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:55:52.913672  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:55:52.915764  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:55:52.915857  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:55:52.915908  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:55:52.915975  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:55:52.916023  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:55:52.916077  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:55:52.916150  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:55:52.916233  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:55:52.916309  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:55:52.916424  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:55:52.916508  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:55:52.916542  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:55:52.916589  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:55:52.916635  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:55:52.916682  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:55:52.916747  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:55:52.916798  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:55:52.916898  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:55:52.916991  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:55:52.917027  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:55:52.917082  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:55:52.918947  173069 out.go:235]   - Booting up control plane ...
	I0203 11:55:52.919052  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:55:52.919135  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:55:52.919213  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:55:52.919298  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:55:52.919440  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:55:52.919509  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:55:52.919578  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919738  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.919799  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919950  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920007  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920158  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920230  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920452  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920558  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920806  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920815  173069 kubeadm.go:310] 
	I0203 11:55:52.920849  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:55:52.920884  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:55:52.920891  173069 kubeadm.go:310] 
	I0203 11:55:52.920924  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:55:52.920954  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:55:52.921051  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:55:52.921066  173069 kubeadm.go:310] 
	I0203 11:55:52.921160  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:55:52.921199  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:55:52.921228  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:55:52.921235  173069 kubeadm.go:310] 
	I0203 11:55:52.921355  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:55:52.921465  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:55:52.921476  173069 kubeadm.go:310] 
	I0203 11:55:52.921595  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:55:52.921666  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:55:52.921725  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:55:52.921781  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:55:52.921820  173069 kubeadm.go:310] 
	I0203 11:55:52.921866  173069 kubeadm.go:394] duration metric: took 8m3.159723737s to StartCluster
	I0203 11:55:52.921917  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:55:52.921979  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:55:52.965327  173069 cri.go:89] found id: ""
	I0203 11:55:52.965360  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.965370  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:55:52.965377  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:55:52.965429  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:55:52.999197  173069 cri.go:89] found id: ""
	I0203 11:55:52.999224  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.999233  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:55:52.999239  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:55:52.999290  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:55:53.033201  173069 cri.go:89] found id: ""
	I0203 11:55:53.033231  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.033239  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:55:53.033245  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:55:53.033298  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:55:53.069227  173069 cri.go:89] found id: ""
	I0203 11:55:53.069262  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.069274  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:55:53.069282  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:55:53.069361  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:55:53.102418  173069 cri.go:89] found id: ""
	I0203 11:55:53.102448  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.102460  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:55:53.102467  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:55:53.102595  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:55:53.134815  173069 cri.go:89] found id: ""
	I0203 11:55:53.134846  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.134859  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:55:53.134865  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:55:53.134916  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:55:53.184017  173069 cri.go:89] found id: ""
	I0203 11:55:53.184063  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.184075  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:55:53.184086  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:55:53.184180  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:55:53.218584  173069 cri.go:89] found id: ""
	I0203 11:55:53.218620  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.218630  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:55:53.218642  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:55:53.218656  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:55:53.267577  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:55:53.267624  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:55:53.280882  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:55:53.280915  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:55:53.352344  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:55:53.352371  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:55:53.352385  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:55:53.451451  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:55:53.451495  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0203 11:55:53.488076  173069 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 11:55:53.488133  173069 out.go:270] * 
	W0203 11:55:53.488199  173069 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.488213  173069 out.go:270] * 
	W0203 11:55:53.489069  173069 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 11:55:53.492291  173069 out.go:201] 
	W0203 11:55:53.493552  173069 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.493606  173069 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 11:55:53.493647  173069 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 11:55:53.494859  173069 out.go:201] 
	
	
	==> CRI-O <==
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.058313482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584296058275108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca7ab8a7-1409-4a2a-ab9b-c3f5f4d3d9b8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.059102105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce37b54e-9e19-4f42-ae1e-3cc04962f4a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.059183748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce37b54e-9e19-4f42-ae1e-3cc04962f4a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.059241769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce37b54e-9e19-4f42-ae1e-3cc04962f4a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.089885895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c05c8db-2dca-4e71-9698-acba074f77a8 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.089980191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c05c8db-2dca-4e71-9698-acba074f77a8 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.091010569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7824974a-cf58-4e2c-8b15-54975b169763 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.091430420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584296091408657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7824974a-cf58-4e2c-8b15-54975b169763 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.091996104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96b4bc80-af00-461d-8db9-1c36bb6955a5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.092042368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96b4bc80-af00-461d-8db9-1c36bb6955a5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.092076583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=96b4bc80-af00-461d-8db9-1c36bb6955a5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.123723069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21300304-8e01-4f5c-ab1b-1c6be0f74928 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.123840329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21300304-8e01-4f5c-ab1b-1c6be0f74928 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.125067608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fb1ae34-db1f-4374-8c98-15a22a229fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.125446195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584296125426195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fb1ae34-db1f-4374-8c98-15a22a229fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.125949345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5b48188-739f-4ecb-ba2a-1685839f8ff9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.126035334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5b48188-739f-4ecb-ba2a-1685839f8ff9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.126079675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e5b48188-739f-4ecb-ba2a-1685839f8ff9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.157989553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2b7d043-8704-4be7-ae6d-3145646bc772 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.158063613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2b7d043-8704-4be7-ae6d-3145646bc772 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.159403845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78e81fe6-ea1c-46cf-b058-cd95e9f2a50c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.159877289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584296159854543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78e81fe6-ea1c-46cf-b058-cd95e9f2a50c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.161489738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1317030a-e75d-4c96-8bfd-a9c32ee105ec name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.161618232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1317030a-e75d-4c96-8bfd-a9c32ee105ec name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:04:56 old-k8s-version-517711 crio[637]: time="2025-02-03 12:04:56.161656779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1317030a-e75d-4c96-8bfd-a9c32ee105ec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb 3 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054598] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038441] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.998629] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.169563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.572597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.331149] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.081564] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074399] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.170591] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.142363] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.233678] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.346278] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064365] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.290562] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[Feb 3 11:48] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 3 11:51] systemd-fstab-generator[5068]: Ignoring "noauto" option for root device
	[Feb 3 11:53] systemd-fstab-generator[5353]: Ignoring "noauto" option for root device
	[  +0.065405] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:04:56 up 17 min,  0 users,  load average: 0.01, 0.03, 0.04
	Linux old-k8s-version-517711 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000b7d440, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b3f500, 0x24, 0x0, ...)
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: net.(*Dialer).DialContext(0xc000149200, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b3f500, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00099ddc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b3f500, 0x24, 0x60, 0x7f9a000db7d8, 0x118, ...)
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: net/http.(*Transport).dial(0xc000888000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b3f500, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: net/http.(*Transport).dialConn(0xc000888000, 0x4f7fe00, 0xc000052030, 0x0, 0xc0005aa600, 0x5, 0xc000b3f500, 0x24, 0x0, 0xc000b7e480, ...)
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: net/http.(*Transport).dialConnFor(0xc000888000, 0xc000af9340)
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]: created by net/http.(*Transport).queueForDial
	Feb 03 12:04:53 old-k8s-version-517711 kubelet[6521]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 03 12:04:53 old-k8s-version-517711 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 12:04:53 old-k8s-version-517711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 12:04:54 old-k8s-version-517711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 03 12:04:54 old-k8s-version-517711 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 12:04:54 old-k8s-version-517711 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 12:04:54 old-k8s-version-517711 kubelet[6530]: I0203 12:04:54.377119    6530 server.go:416] Version: v1.20.0
	Feb 03 12:04:54 old-k8s-version-517711 kubelet[6530]: I0203 12:04:54.377612    6530 server.go:837] Client rotation is on, will bootstrap in background
	Feb 03 12:04:54 old-k8s-version-517711 kubelet[6530]: I0203 12:04:54.381939    6530 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 12:04:54 old-k8s-version-517711 kubelet[6530]: I0203 12:04:54.384973    6530 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 03 12:04:54 old-k8s-version-517711 kubelet[6530]: W0203 12:04:54.385194    6530 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
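Note: the kubeadm failure captured above points at the kubelet never becoming healthy on port 10248. The commands below are a hedged troubleshooting sketch only, mirroring the hints printed in the log itself; they are not part of the captured output, and the profile name old-k8s-version-517711 is assumed from the post-mortem commands that follow.

	# inspect the kubelet unit and its recent journal on the node
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo journalctl -xeu kubelet -n 100"
	# list any control-plane containers cri-o may have started, as the log suggests
	out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver override suggested at the end of the log
	out/minikube-linux-amd64 start -p old-k8s-version-517711 --extra-config=kubelet.cgroup-driver=systemd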
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (259.507392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-517711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.62s)
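Note: with the apiserver reported as Stopped, a hedged follow-up (not part of the captured output; profile name assumed unchanged) is to query the remaining component states with the same Go-template mechanism used by the helper above, and to collect the full logs referenced in the issue-template box:

	# print host, kubelet and apiserver state in one status query
	out/minikube-linux-amd64 status -p old-k8s-version-517711 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# gather complete logs for attaching to a GitHub issue, as the log box above suggests
	out/minikube-linux-amd64 -p old-k8s-version-517711 logs --file=logs.txt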

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (391.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
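The repeated warnings below come from the test helper polling the apiserver for dashboard pods; a hedged equivalent of that query (assuming the kubectl context matches the profile name, which this excerpt does not show) would be:

	kubectl --context old-k8s-version-517711 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard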
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 12:05:09.071591  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
E0203 12:05:42.117213  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(previous warning repeated 15 more times)
E0203 12:05:58.251389  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(previous warning repeated 42 more times)
E0203 12:06:41.615924  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(previous warning repeated 19 more times)
E0203 12:07:01.719725  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(previous warning repeated 68 more times)
E0203 12:08:09.948437  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/no-preload-085638/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.203:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.203:8443: connect: connection refused
(previous warning repeated 11 more times)
E0203 12:08:22.483717  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 27 times]
E0203 12:08:49.262707  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/default-k8s-diff-port-138645/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 11 times]
E0203 12:09:00.131003  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 8 times]
E0203 12:09:08.317865  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 25 times]
E0203 12:09:33.019190  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/no-preload-085638/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 16 times]
E0203 12:09:49.329797  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 20 times]
E0203 12:10:09.072503  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 3 times]
E0203 12:10:12.327002  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/default-k8s-diff-port-138645/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 30 times]
E0203 12:10:42.117301  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 16 times]
E0203 12:10:58.251519  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (238.72456ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-517711" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-517711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-517711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.001µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-517711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
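The connection-refused warnings above come from the helper repeatedly listing pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label selector against the stopped apiserver at 192.168.61.203:8443. Below is a minimal client-go sketch of that kind of poll, not the harness's actual helper; the kubeconfig path, the 5-second interval, and the function layout are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the test harness points at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 9m0s matches the timeout reported in the failure above.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// With the apiserver stopped this surfaces as "connect: connection refused".
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 {
			fmt.Println("dashboard pods found:", len(pods.Items))
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded")
			return
		case <-time.After(5 * time.Second): // polling interval is an assumption
		}
	}
}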
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (220.415311ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-517711 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-691067 image list                          | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| delete  | -p embed-certs-691067                                  | embed-certs-691067           | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| start   | -p newest-cni-586043 --memory=2200 --alsologtostderr   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-085638 image list                           | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| delete  | -p no-preload-085638                                   | no-preload-085638            | jenkins | v1.35.0 | 03 Feb 25 11:50 UTC | 03 Feb 25 11:50 UTC |
	| image   | default-k8s-diff-port-138645                           | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-138645 | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | default-k8s-diff-port-138645                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-586043             | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-586043                  | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-586043 --memory=2200 --alsologtostderr   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:51 UTC | 03 Feb 25 11:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-586043 image list                           | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	| delete  | -p newest-cni-586043                                   | newest-cni-586043            | jenkins | v1.35.0 | 03 Feb 25 11:52 UTC | 03 Feb 25 11:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:51:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:51:49.897155  175844 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:51:49.897275  175844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:51:49.897287  175844 out.go:358] Setting ErrFile to fd 2...
	I0203 11:51:49.897291  175844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:51:49.897486  175844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:51:49.898057  175844 out.go:352] Setting JSON to false
	I0203 11:51:49.898943  175844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9252,"bootTime":1738574258,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:51:49.899051  175844 start.go:139] virtualization: kvm guest
	I0203 11:51:49.901414  175844 out.go:177] * [newest-cni-586043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:51:49.903016  175844 notify.go:220] Checking for updates...
	I0203 11:51:49.903024  175844 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:51:49.904418  175844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:51:49.905475  175844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:51:49.906695  175844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:51:49.907794  175844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:51:49.909017  175844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:51:49.910440  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:51:49.910830  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:49.910906  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:49.925489  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0203 11:51:49.925936  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:49.926599  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:49.926617  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:49.926982  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:49.927181  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:49.927443  175844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:51:49.927733  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:49.927780  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:49.942754  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0203 11:51:49.943278  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:49.943789  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:49.943810  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:49.944116  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:49.944333  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:49.982359  175844 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 11:51:49.983564  175844 start.go:297] selected driver: kvm2
	I0203 11:51:49.983579  175844 start.go:901] validating driver "kvm2" against &{Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Net
work: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:51:49.983680  175844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:51:49.984357  175844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:51:49.984460  175844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 11:51:49.999536  175844 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 11:51:49.999973  175844 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0203 11:51:50.000007  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:51:50.000057  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:51:50.000113  175844 start.go:340] cluster config:
	{Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:51:50.000234  175844 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:51:50.001824  175844 out.go:177] * Starting "newest-cni-586043" primary control-plane node in "newest-cni-586043" cluster
	I0203 11:51:50.003075  175844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:51:50.003128  175844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 11:51:50.003141  175844 cache.go:56] Caching tarball of preloaded images
	I0203 11:51:50.003229  175844 preload.go:172] Found /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0203 11:51:50.003240  175844 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0203 11:51:50.003363  175844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/config.json ...
	I0203 11:51:50.003582  175844 start.go:360] acquireMachinesLock for newest-cni-586043: {Name:mk4d774b88f87fe0539ca3e30dd98aae8a4d5437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:51:50.003637  175844 start.go:364] duration metric: took 33.224µs to acquireMachinesLock for "newest-cni-586043"
	I0203 11:51:50.003664  175844 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:51:50.003675  175844 fix.go:54] fixHost starting: 
	I0203 11:51:50.003993  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:51:50.004037  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:51:50.018719  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0203 11:51:50.020226  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:51:50.020848  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:51:50.020873  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:51:50.021243  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:51:50.021461  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:51:50.021601  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:51:50.023310  175844 fix.go:112] recreateIfNeeded on newest-cni-586043: state=Stopped err=<nil>
	I0203 11:51:50.023355  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	W0203 11:51:50.023508  175844 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:51:50.025216  175844 out.go:177] * Restarting existing kvm2 VM for "newest-cni-586043" ...
	I0203 11:51:47.958652  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:47.972404  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:47.972476  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:48.003924  173069 cri.go:89] found id: ""
	I0203 11:51:48.003952  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.003963  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:48.003972  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:48.004036  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:48.035462  173069 cri.go:89] found id: ""
	I0203 11:51:48.035495  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.035507  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:48.035516  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:48.035571  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:48.066226  173069 cri.go:89] found id: ""
	I0203 11:51:48.066255  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.066266  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:48.066274  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:48.066340  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:48.097119  173069 cri.go:89] found id: ""
	I0203 11:51:48.097150  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.097162  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:48.097170  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:48.097234  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:48.129010  173069 cri.go:89] found id: ""
	I0203 11:51:48.129049  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.129061  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:48.129069  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:48.129128  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:48.172322  173069 cri.go:89] found id: ""
	I0203 11:51:48.172355  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.172363  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:48.172371  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:48.172442  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:48.203549  173069 cri.go:89] found id: ""
	I0203 11:51:48.203579  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.203587  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:48.203594  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:48.203645  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:48.234281  173069 cri.go:89] found id: ""
	I0203 11:51:48.234306  173069 logs.go:282] 0 containers: []
	W0203 11:51:48.234317  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:48.234330  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:48.234347  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:48.246492  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:48.246517  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:48.310115  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:48.310151  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:48.310168  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:48.386999  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:48.387026  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:48.423031  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:48.423061  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
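Each polling round above first checks, over SSH, whether any control-plane container exists by name before falling back to gathering kubelet, dmesg, and CRI-O logs. The following is a hedged local sketch of that presence check, reusing the same crictl invocation the log shows; it runs the command directly rather than through minikube's SSH runner, and the container-name list and output handling are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContainer reports whether crictl lists any container matching the given
// name filter, mirroring the "sudo crictl ps -a --quiet --name=<name>" calls above.
func hasContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	// --quiet prints only container IDs; empty output means no match was found.
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		found, err := hasContainer(name)
		if err != nil {
			fmt.Println(name, "check failed:", err)
			continue
		}
		fmt.Printf("%s: found=%v\n", name, found)
	}
}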
	I0203 11:51:50.971751  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:50.984547  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:51:50.984616  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:51:51.021321  173069 cri.go:89] found id: ""
	I0203 11:51:51.021357  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.021367  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:51:51.021376  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:51:51.021435  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:51:51.052316  173069 cri.go:89] found id: ""
	I0203 11:51:51.052346  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.052365  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:51:51.052374  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:51:51.052439  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:51:51.095230  173069 cri.go:89] found id: ""
	I0203 11:51:51.095260  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.095273  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:51:51.095281  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:51:51.095344  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:51:50.026238  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Start
	I0203 11:51:50.026392  175844 main.go:141] libmachine: (newest-cni-586043) starting domain...
	I0203 11:51:50.026416  175844 main.go:141] libmachine: (newest-cni-586043) ensuring networks are active...
	I0203 11:51:50.027168  175844 main.go:141] libmachine: (newest-cni-586043) Ensuring network default is active
	I0203 11:51:50.027412  175844 main.go:141] libmachine: (newest-cni-586043) Ensuring network mk-newest-cni-586043 is active
	I0203 11:51:50.027811  175844 main.go:141] libmachine: (newest-cni-586043) getting domain XML...
	I0203 11:51:50.028591  175844 main.go:141] libmachine: (newest-cni-586043) creating domain...
	I0203 11:51:51.307305  175844 main.go:141] libmachine: (newest-cni-586043) waiting for IP...
	I0203 11:51:51.308386  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.308948  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.309071  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.308963  175880 retry.go:31] will retry after 231.852312ms: waiting for domain to come up
	I0203 11:51:51.542677  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.543280  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.543310  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.543240  175880 retry.go:31] will retry after 253.507055ms: waiting for domain to come up
	I0203 11:51:51.798941  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:51.799486  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:51.799509  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:51.799452  175880 retry.go:31] will retry after 481.304674ms: waiting for domain to come up
	I0203 11:51:52.282121  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:52.282587  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:52.282613  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:52.282573  175880 retry.go:31] will retry after 574.20795ms: waiting for domain to come up
	I0203 11:51:52.858249  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:52.858753  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:52.858797  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:52.858730  175880 retry.go:31] will retry after 479.45061ms: waiting for domain to come up
	I0203 11:51:53.339378  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:53.339968  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:53.340048  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:53.339937  175880 retry.go:31] will retry after 611.732312ms: waiting for domain to come up
	I0203 11:51:53.953770  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:53.954271  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:53.954309  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:53.954225  175880 retry.go:31] will retry after 1.020753974s: waiting for domain to come up
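After restarting the libvirt domain, the driver polls for the VM's IP and backs off between attempts ("will retry after ...: waiting for domain to come up"). Below is a generic retry-with-backoff sketch in the same spirit; it is not minikube's retry package, and the intervals, cap, and lookup stub are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a randomized, growing interval between attempts. lookup stands in
// for querying libvirt for the domain's DHCP lease.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.151", nil // address taken from the cluster config above
	}, 30*time.Second)
	fmt.Println(ip, err)
}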
	I0203 11:51:51.127525  173069 cri.go:89] found id: ""
	I0203 11:51:51.127555  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.127564  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:51:51.127571  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:51:51.127642  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:51:51.174651  173069 cri.go:89] found id: ""
	I0203 11:51:51.174683  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.174694  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:51:51.174700  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:51:51.174761  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:51:51.208470  173069 cri.go:89] found id: ""
	I0203 11:51:51.208498  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.208510  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:51:51.208518  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:51:51.208585  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:51:51.242996  173069 cri.go:89] found id: ""
	I0203 11:51:51.243022  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.243031  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:51:51.243042  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:51:51.243103  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:51:51.277561  173069 cri.go:89] found id: ""
	I0203 11:51:51.277584  173069 logs.go:282] 0 containers: []
	W0203 11:51:51.277592  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:51:51.277602  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:51:51.277613  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 11:51:51.316285  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:51:51.316313  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:51:51.378564  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:51:51.378598  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:51:51.391948  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:51:51.391974  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:51:51.459101  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:51:51.459127  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:51:51.459140  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:51:54.041961  173069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:51:54.057395  173069 kubeadm.go:597] duration metric: took 4m4.242570395s to restartPrimaryControlPlane
	W0203 11:51:54.057514  173069 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0203 11:51:54.057545  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:51:54.515481  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:51:54.529356  173069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:51:54.538455  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:51:54.547140  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:51:54.547165  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:51:54.547215  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:51:54.555393  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:51:54.555454  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:51:54.564221  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:51:54.572805  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:51:54.572854  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:51:54.581348  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.589519  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:51:54.589584  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:51:54.598204  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:51:54.606299  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:51:54.606354  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
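The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes the file whenever the grep fails (here the files simply do not exist after the reset). A hedged Go sketch mirroring those exact commands follows; it runs them locally, whereas the log shows them going through the SSH runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// Mirrors "sudo grep <endpoint> <file>"; a non-zero exit means the expected
		// endpoint is missing (or the file does not exist), so the file is removed.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Println("rm failed:", err)
			}
		}
	}
}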
	I0203 11:51:54.614879  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:51:54.681507  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:51:54.681579  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:51:54.833975  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:51:54.834115  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:51:54.834236  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:51:55.015734  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:51:55.017800  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:51:55.017908  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:51:55.018029  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:51:55.018147  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:51:55.018236  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:51:55.018336  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:51:55.018420  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:51:55.018509  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:51:55.018605  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:51:55.018770  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:51:55.019144  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:51:55.019209  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:51:55.019307  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:51:55.202633  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:51:55.377699  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:51:55.476193  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:51:55.684690  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:51:55.706297  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:51:55.707243  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:51:55.707310  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:51:55.857226  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:51:55.859128  173069 out.go:235]   - Booting up control plane ...
	I0203 11:51:55.859247  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:51:55.863942  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:51:55.865838  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:51:55.867142  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:51:55.871067  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
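
The bootstrap above runs kubeadm init against the generated /var/tmp/minikube/kubeadm.yaml with a long --ignore-preflight-errors list so that a re-run tolerates leftovers from a previous start. A minimal sketch of driving the same invocation from Go with os/exec, assuming the config path from the log and only a shortened subset of its error list (illustrative, not minikube's bootstrapper code):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Shortened subset of the preflight errors ignored in the log above.
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"Port-10250", "Swap", "NumCPU", "Mem",
		}
		// sudo env PATH=<binaries dir> kubeadm init --config ... --ignore-preflight-errors=...
		// The ":/usr/bin:/bin" fallback is an assumption; the log appends $PATH instead.
		cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.20.0:/usr/bin:/bin",
			"kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors="+strings.Join(ignored, ","),
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}
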
	I0203 11:51:54.976708  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:54.977205  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:54.977268  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:54.977201  175880 retry.go:31] will retry after 1.395111029s: waiting for domain to come up
	I0203 11:51:56.374208  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:56.374601  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:56.374630  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:56.374585  175880 retry.go:31] will retry after 1.224641048s: waiting for domain to come up
	I0203 11:51:57.600995  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:57.601460  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:57.601486  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:57.601423  175880 retry.go:31] will retry after 2.153368032s: waiting for domain to come up
	I0203 11:51:59.757799  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:51:59.758428  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:51:59.758462  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:51:59.758378  175880 retry.go:31] will retry after 1.84005517s: waiting for domain to come up
	I0203 11:52:01.600091  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:01.600507  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:52:01.600557  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:52:01.600500  175880 retry.go:31] will retry after 3.236577417s: waiting for domain to come up
	I0203 11:52:04.840924  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:04.841396  175844 main.go:141] libmachine: (newest-cni-586043) DBG | unable to find current IP address of domain newest-cni-586043 in network mk-newest-cni-586043
	I0203 11:52:04.841418  175844 main.go:141] libmachine: (newest-cni-586043) DBG | I0203 11:52:04.841372  175880 retry.go:31] will retry after 4.182823067s: waiting for domain to come up
	I0203 11:52:09.028277  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.028811  175844 main.go:141] libmachine: (newest-cni-586043) found domain IP: 192.168.72.151
	I0203 11:52:09.028840  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has current primary IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.028850  175844 main.go:141] libmachine: (newest-cni-586043) reserving static IP address...
	I0203 11:52:09.029304  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "newest-cni-586043", mac: "52:54:00:47:62:16", ip: "192.168.72.151"} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.029356  175844 main.go:141] libmachine: (newest-cni-586043) DBG | skip adding static IP to network mk-newest-cni-586043 - found existing host DHCP lease matching {name: "newest-cni-586043", mac: "52:54:00:47:62:16", ip: "192.168.72.151"}
	I0203 11:52:09.029375  175844 main.go:141] libmachine: (newest-cni-586043) reserved static IP address 192.168.72.151 for domain newest-cni-586043
	I0203 11:52:09.029391  175844 main.go:141] libmachine: (newest-cni-586043) waiting for SSH...
	I0203 11:52:09.029402  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Getting to WaitForSSH function...
	I0203 11:52:09.031306  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.031561  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.031584  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.031691  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Using SSH client type: external
	I0203 11:52:09.031718  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Using SSH private key: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa (-rw-------)
	I0203 11:52:09.031754  175844 main.go:141] libmachine: (newest-cni-586043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0203 11:52:09.031768  175844 main.go:141] libmachine: (newest-cni-586043) DBG | About to run SSH command:
	I0203 11:52:09.031783  175844 main.go:141] libmachine: (newest-cni-586043) DBG | exit 0
	I0203 11:52:09.158021  175844 main.go:141] libmachine: (newest-cni-586043) DBG | SSH cmd err, output: <nil>: 
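
Before provisioning continues, the driver keeps polling libvirt for a DHCP lease with a growing retry delay and then probes the guest with a plain `exit 0` over SSH. A rough sketch of that wait-for-SSH pattern using golang.org/x/crypto/ssh; the address, user, and key path mirror the log, the backoff constants are arbitrary, and this is not libmachine's actual implementation:

	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// waitForSSH retries an "exit 0" over SSH until the guest answers or the deadline passes.
	func waitForSSH(addr, user, keyPath string, deadline time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		delay := time.Second
		for start := time.Now(); time.Since(start) < deadline; {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if rerr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay += delay / 2 // grow the wait, roughly like the retry.go delays above
			}
		}
		return fmt.Errorf("ssh not reachable at %s within %s", addr, deadline)
	}
	
	func main() {
		err := waitForSSH("192.168.72.151:22", "docker", "/path/to/id_rsa", 2*time.Minute)
		fmt.Println(err)
	}
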
	I0203 11:52:09.158333  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetConfigRaw
	I0203 11:52:09.158996  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:09.161428  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.161811  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.161843  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.162127  175844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/config.json ...
	I0203 11:52:09.162368  175844 machine.go:93] provisionDockerMachine start ...
	I0203 11:52:09.162395  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:09.162624  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.164802  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.165087  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.165126  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.165207  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.165381  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.165547  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.165670  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.165859  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.166136  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.166151  175844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:52:09.274234  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:52:09.274266  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.274537  175844 buildroot.go:166] provisioning hostname "newest-cni-586043"
	I0203 11:52:09.274559  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.274783  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.277599  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.277966  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.278013  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.278316  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.278559  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.278755  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.278915  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.279070  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.279267  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.279283  175844 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-586043 && echo "newest-cni-586043" | sudo tee /etc/hostname
	I0203 11:52:09.400130  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-586043
	
	I0203 11:52:09.400158  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.402972  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.403283  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.403317  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.403501  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.403705  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.403913  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.404066  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.404242  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.404412  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.404436  175844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-586043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-586043/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-586043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:52:09.517890  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:52:09.517929  175844 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20354-109432/.minikube CaCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20354-109432/.minikube}
	I0203 11:52:09.517949  175844 buildroot.go:174] setting up certificates
	I0203 11:52:09.517959  175844 provision.go:84] configureAuth start
	I0203 11:52:09.517969  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetMachineName
	I0203 11:52:09.518273  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:09.520729  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.521035  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.521065  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.521252  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.523526  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.523855  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.523884  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.524049  175844 provision.go:143] copyHostCerts
	I0203 11:52:09.524110  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem, removing ...
	I0203 11:52:09.524130  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem
	I0203 11:52:09.524200  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/ca.pem (1078 bytes)
	I0203 11:52:09.524288  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem, removing ...
	I0203 11:52:09.524296  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem
	I0203 11:52:09.524320  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/cert.pem (1123 bytes)
	I0203 11:52:09.524376  175844 exec_runner.go:144] found /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem, removing ...
	I0203 11:52:09.524383  175844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem
	I0203 11:52:09.524402  175844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20354-109432/.minikube/key.pem (1679 bytes)
	I0203 11:52:09.524452  175844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem org=jenkins.newest-cni-586043 san=[127.0.0.1 192.168.72.151 localhost minikube newest-cni-586043]
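
The server certificate above is issued from the local minikube CA with both DNS and IP SANs (127.0.0.1, 192.168.72.151, localhost, minikube, newest-cni-586043). A condensed, self-contained sketch of issuing such a certificate with crypto/x509; it creates a throwaway CA in memory, whereas minikube signs against the existing ca.pem/ca-key.pem under .minikube/certs:

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA; the real flow loads the existing CA cert and key instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
	
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-586043"}},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-586043"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.151")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
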
	I0203 11:52:09.790829  175844 provision.go:177] copyRemoteCerts
	I0203 11:52:09.790896  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:52:09.790920  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.793962  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.794408  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.794440  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.794595  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.794829  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.794997  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.795367  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:09.881518  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 11:52:09.906901  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0203 11:52:09.931388  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 11:52:09.953430  175844 provision.go:87] duration metric: took 435.447216ms to configureAuth
	I0203 11:52:09.953471  175844 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:52:09.953676  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:52:09.953755  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:09.956581  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.956917  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:09.956942  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:09.957055  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:09.957227  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.957362  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:09.957584  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:09.957788  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:09.958041  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:09.958063  175844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0203 11:52:10.176560  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0203 11:52:10.176589  175844 machine.go:96] duration metric: took 1.014204647s to provisionDockerMachine
	I0203 11:52:10.176602  175844 start.go:293] postStartSetup for "newest-cni-586043" (driver="kvm2")
	I0203 11:52:10.176613  175844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:52:10.176631  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.176961  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:52:10.176996  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.179737  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.180134  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.180164  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.180316  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.180547  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.180744  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.180895  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.266380  175844 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:52:10.270497  175844 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:52:10.270522  175844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/addons for local assets ...
	I0203 11:52:10.270598  175844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20354-109432/.minikube/files for local assets ...
	I0203 11:52:10.270682  175844 filesync.go:149] local asset: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem -> 1166062.pem in /etc/ssl/certs
	I0203 11:52:10.270792  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:52:10.281329  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:52:10.305215  175844 start.go:296] duration metric: took 128.597013ms for postStartSetup
	I0203 11:52:10.305259  175844 fix.go:56] duration metric: took 20.301585236s for fixHost
	I0203 11:52:10.305281  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.308015  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.308340  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.308363  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.308576  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.308776  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.308933  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.309106  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.309269  175844 main.go:141] libmachine: Using SSH client type: native
	I0203 11:52:10.309477  175844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0203 11:52:10.309488  175844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:52:10.418635  175844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738583530.394973154
	
	I0203 11:52:10.418670  175844 fix.go:216] guest clock: 1738583530.394973154
	I0203 11:52:10.418681  175844 fix.go:229] Guest: 2025-02-03 11:52:10.394973154 +0000 UTC Remote: 2025-02-03 11:52:10.305263637 +0000 UTC m=+20.446505021 (delta=89.709517ms)
	I0203 11:52:10.418749  175844 fix.go:200] guest clock delta is within tolerance: 89.709517ms
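
The guest clock check runs `date +%s.%N` over SSH and compares the result with the host's wall clock, accepting it when the delta stays within tolerance (89.7ms here). A small sketch of that comparison given the raw command output; the one-second tolerance below is an assumption for illustration, the log only reports that the delta was acceptable:

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// clockDelta parses the guest's "date +%s.%N" output and returns guest minus host.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}
	
	func main() {
		const tolerance = time.Second // assumed tolerance, for illustration only
		d, err := clockDelta("1738583530.394973154", time.Now())
		if err != nil {
			panic(err)
		}
		within := d >= -tolerance && d <= tolerance
		fmt.Printf("guest clock delta=%v within tolerance: %v\n", d, within)
	}
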
	I0203 11:52:10.418762  175844 start.go:83] releasing machines lock for "newest-cni-586043", held for 20.41511092s
	I0203 11:52:10.418798  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.419078  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:10.421707  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.422072  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.422103  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.422248  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.422797  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.422964  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:10.423053  175844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 11:52:10.423102  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.423134  175844 ssh_runner.go:195] Run: cat /version.json
	I0203 11:52:10.423157  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:10.425822  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.425947  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426182  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.426204  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426244  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:10.426265  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:10.426381  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.426506  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:10.426588  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.426696  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:10.426767  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.426837  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:10.426898  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.426931  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:10.534682  175844 ssh_runner.go:195] Run: systemctl --version
	I0203 11:52:10.540384  175844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0203 11:52:10.689697  175844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:52:10.695210  175844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:52:10.695274  175844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:52:10.710890  175844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:52:10.710920  175844 start.go:495] detecting cgroup driver to use...
	I0203 11:52:10.710996  175844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:52:10.726494  175844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:52:10.739926  175844 docker.go:217] disabling cri-docker service (if available) ...
	I0203 11:52:10.739983  175844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0203 11:52:10.753560  175844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0203 11:52:10.767625  175844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0203 11:52:10.883158  175844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0203 11:52:11.040505  175844 docker.go:233] disabling docker service ...
	I0203 11:52:11.040580  175844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0203 11:52:11.054421  175844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0203 11:52:11.067456  175844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0203 11:52:11.197256  175844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0203 11:52:11.326650  175844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0203 11:52:11.347953  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:52:11.365712  175844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0203 11:52:11.365783  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.375704  175844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0203 11:52:11.375785  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.385498  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.395211  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.404733  175844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:52:11.414432  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.424057  175844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.439837  175844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0203 11:52:11.449629  175844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:52:11.458405  175844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:52:11.458478  175844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:52:11.470212  175844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
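
Because the bridge netfilter sysctl is missing on a fresh boot, the fix above is to load br_netfilter and enable IPv4 forwarding before CRI-O is restarted. A quick sketch of verifying the resulting state from Go by reading the corresponding /proc/sys entries:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// sysctlIsOne reports whether a /proc/sys entry exists and is set to "1".
	func sysctlIsOne(path string) bool {
		b, err := os.ReadFile(path)
		return err == nil && strings.TrimSpace(string(b)) == "1"
	}
	
	func main() {
		checks := map[string]string{
			"bridge-nf-call-iptables": "/proc/sys/net/bridge/bridge-nf-call-iptables",
			"ip_forward":              "/proc/sys/net/ipv4/ip_forward",
		}
		for name, p := range checks {
			fmt.Printf("%-24s enabled=%v\n", name, sysctlIsOne(p))
		}
	}
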
	I0203 11:52:11.480208  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:11.603955  175844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0203 11:52:11.686456  175844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0203 11:52:11.686529  175844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0203 11:52:11.690882  175844 start.go:563] Will wait 60s for crictl version
	I0203 11:52:11.690934  175844 ssh_runner.go:195] Run: which crictl
	I0203 11:52:11.694501  175844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:52:11.731809  175844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0203 11:52:11.731905  175844 ssh_runner.go:195] Run: crio --version
	I0203 11:52:11.761777  175844 ssh_runner.go:195] Run: crio --version
	I0203 11:52:11.793653  175844 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0203 11:52:11.795062  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetIP
	I0203 11:52:11.797753  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:11.798098  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:11.798125  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:11.798347  175844 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0203 11:52:11.802272  175844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
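
The one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending the current gateway IP, then copying the temp file back into place with sudo. The same idempotent edit expressed directly in Go; it writes to a scratch path here, since editing the real /etc/hosts needs root:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry drops any existing line ending in "<tab><host>" and appends "ip<tab>host",
	// matching the grep -v / echo / cp pattern in the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/tmp/hosts.example", "192.168.72.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
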
	I0203 11:52:11.815917  175844 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0203 11:52:11.817298  175844 kubeadm.go:883] updating cluster {Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:52:11.817452  175844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 11:52:11.817531  175844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:52:11.850957  175844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0203 11:52:11.851042  175844 ssh_runner.go:195] Run: which lz4
	I0203 11:52:11.854770  175844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:52:11.858671  175844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:52:11.858703  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0203 11:52:13.043197  175844 crio.go:462] duration metric: took 1.188462639s to copy over tarball
	I0203 11:52:13.043293  175844 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:52:15.160894  175844 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117557833s)
	I0203 11:52:15.160939  175844 crio.go:469] duration metric: took 2.117706974s to extract the tarball
	I0203 11:52:15.160949  175844 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:52:15.198286  175844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0203 11:52:15.239287  175844 crio.go:514] all images are preloaded for cri-o runtime.
	I0203 11:52:15.239321  175844 cache_images.go:84] Images are preloaded, skipping loading
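
The preload flow stats /preloaded.tar.lz4 on the guest, copies the cached tarball over when that check fails, unpacks it into /var with tar's lz4 filter, and then re-lists images via crictl. A compact sketch of the check-then-extract step driven from Go; paths are copied from the log and the scp transfer is elided:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			// In the real flow the cached tarball is copied over at this point.
			fmt.Println("preload tarball missing:", err)
			return
		}
		// Same extraction the log shows: keep xattrs, decompress with lz4, unpack under /var.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("extract failed: %v\n%s\n", err, out)
			return
		}
		fmt.Println("preloaded images extracted")
	}
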
	I0203 11:52:15.239330  175844 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.32.1 crio true true} ...
	I0203 11:52:15.239461  175844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-586043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:52:15.239619  175844 ssh_runner.go:195] Run: crio config
	I0203 11:52:15.287775  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:52:15.287800  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:52:15.287810  175844 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0203 11:52:15.287833  175844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-586043 NodeName:newest-cni-586043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:52:15.287959  175844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-586043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
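
The kubeadm config above is rendered from the cluster options gathered earlier (advertise address, pod and service CIDRs, cluster name, extra args). A toy sketch of producing one fragment of it with text/template; the struct fields and template text are illustrative stand-ins, not minikube's actual template:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	type clusterOpts struct {
		AdvertiseAddress string
		BindPort         int
		PodSubnet        string
		ServiceSubnet    string
		ClusterName      string
	}
	
	const initConfig = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		opts := clusterOpts{
			AdvertiseAddress: "192.168.72.151",
			BindPort:         8443,
			PodSubnet:        "10.42.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			ClusterName:      "mk",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}
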
	
	I0203 11:52:15.288022  175844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:52:15.297463  175844 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:52:15.297537  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 11:52:15.306437  175844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0203 11:52:15.321420  175844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:52:15.336615  175844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0203 11:52:15.352231  175844 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0203 11:52:15.355798  175844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:52:15.367061  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:15.495735  175844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:52:15.512622  175844 certs.go:68] Setting up /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043 for IP: 192.168.72.151
	I0203 11:52:15.512651  175844 certs.go:194] generating shared ca certs ...
	I0203 11:52:15.512674  175844 certs.go:226] acquiring lock for ca certs: {Name:mkceafe81f89678b7cbc2a7f6faab4e784fcb207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:15.512839  175844 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key
	I0203 11:52:15.512893  175844 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key
	I0203 11:52:15.512907  175844 certs.go:256] generating profile certs ...
	I0203 11:52:15.513010  175844 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/client.key
	I0203 11:52:15.513093  175844 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.key.63294795
	I0203 11:52:15.513150  175844 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.key
	I0203 11:52:15.513307  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem (1338 bytes)
	W0203 11:52:15.513348  175844 certs.go:480] ignoring /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606_empty.pem, impossibly tiny 0 bytes
	I0203 11:52:15.513370  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 11:52:15.513458  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/ca.pem (1078 bytes)
	I0203 11:52:15.513498  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/cert.pem (1123 bytes)
	I0203 11:52:15.513536  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/certs/key.pem (1679 bytes)
	I0203 11:52:15.513590  175844 certs.go:484] found cert: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem (1708 bytes)
	I0203 11:52:15.514532  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:52:15.549975  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 11:52:15.586087  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:52:15.616774  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 11:52:15.650861  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 11:52:15.677800  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:52:15.702344  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:52:15.724326  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/newest-cni-586043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 11:52:15.746037  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/ssl/certs/1166062.pem --> /usr/share/ca-certificates/1166062.pem (1708 bytes)
	I0203 11:52:15.768136  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:52:15.790221  175844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20354-109432/.minikube/certs/116606.pem --> /usr/share/ca-certificates/116606.pem (1338 bytes)
	I0203 11:52:15.812120  175844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:52:15.828614  175844 ssh_runner.go:195] Run: openssl version
	I0203 11:52:15.834594  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116606.pem && ln -fs /usr/share/ca-certificates/116606.pem /etc/ssl/certs/116606.pem"
	I0203 11:52:15.845364  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.849706  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:41 /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.849770  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116606.pem
	I0203 11:52:15.855545  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116606.pem /etc/ssl/certs/51391683.0"
	I0203 11:52:15.866161  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166062.pem && ln -fs /usr/share/ca-certificates/1166062.pem /etc/ssl/certs/1166062.pem"
	I0203 11:52:15.876957  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.881522  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:41 /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.881602  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166062.pem
	I0203 11:52:15.887046  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1166062.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:52:15.897606  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:52:15.908452  175844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.912883  175844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.912951  175844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:52:15.918459  175844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
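	The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention for CA lookup in /etc/ssl/certs. A minimal Go sketch of that step, assuming the openssl binary is on PATH (helper and file names are illustrative, not minikube's actual code):

	// subjecthash.go: recreate `openssl x509 -hash -noout -in cert` + `ln -fs cert <hash>.0`.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
	// the <hash>.0 symlink in certsDir, replacing any existing link (-f semantics).
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
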
	I0203 11:52:15.928802  175844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:52:15.933142  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 11:52:15.938806  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 11:52:15.944291  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 11:52:15.949834  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 11:52:15.955213  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 11:52:15.960551  175844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
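	The `-checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now. A rough Go equivalent using crypto/x509 (file name and path are illustrative):

	// checkend.go: approximate `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at certPath expires within d.
	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// openssl -checkend exits non-zero when NotAfter falls inside the window.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}
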
	I0203 11:52:15.965905  175844 kubeadm.go:392] StartCluster: {Name:newest-cni-586043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-586043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:52:15.965992  175844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0203 11:52:15.966055  175844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:52:16.005648  175844 cri.go:89] found id: ""
	I0203 11:52:16.005716  175844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:52:16.015599  175844 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 11:52:16.015623  175844 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 11:52:16.015672  175844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 11:52:16.024927  175844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:52:16.025481  175844 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-586043" does not appear in /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:52:16.025667  175844 kubeconfig.go:62] /home/jenkins/minikube-integration/20354-109432/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-586043" cluster setting kubeconfig missing "newest-cni-586043" context setting]
	I0203 11:52:16.025988  175844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:16.028966  175844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 11:52:16.038295  175844 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0203 11:52:16.038346  175844 kubeadm.go:1160] stopping kube-system containers ...
	I0203 11:52:16.038363  175844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0203 11:52:16.038415  175844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0203 11:52:16.076934  175844 cri.go:89] found id: ""
	I0203 11:52:16.077021  175844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 11:52:16.093360  175844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:52:16.102923  175844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:52:16.102952  175844 kubeadm.go:157] found existing configuration files:
	
	I0203 11:52:16.103002  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:52:16.111845  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:52:16.111910  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:52:16.121141  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:52:16.129822  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:52:16.129886  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:52:16.138692  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:52:16.147297  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:52:16.147368  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:52:16.157136  175844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:52:16.166841  175844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:52:16.166927  175844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:52:16.176387  175844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:52:16.185620  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:16.308286  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.428161  175844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.119820981s)
	I0203 11:52:17.428197  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.617442  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.710553  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:17.786236  175844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:52:17.786327  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:18.287335  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:18.787276  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.287247  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.787249  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:19.832243  175844 api_server.go:72] duration metric: took 2.046005993s to wait for apiserver process to appear ...
	I0203 11:52:19.832296  175844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:52:19.832324  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:19.832848  175844 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": dial tcp 192.168.72.151:8443: connect: connection refused
	I0203 11:52:20.333113  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.593112  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:52:22.593149  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:52:22.593168  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.615767  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 11:52:22.615799  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 11:52:22.833274  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:22.838649  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:52:22.838680  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:52:23.333376  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:23.338020  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 11:52:23.338047  175844 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 11:52:23.832467  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:23.836670  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0203 11:52:23.842741  175844 api_server.go:141] control plane version: v1.32.1
	I0203 11:52:23.842765  175844 api_server.go:131] duration metric: took 4.010461718s to wait for apiserver health ...
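	The healthz wait above tolerates connection-refused, the 403s from the anonymous probe (RBAC bootstrap roles not yet installed) and the 500s while post-start hooks finish, and stops on the first 200 "ok". A minimal sketch of such a polling loop (not minikube's actual implementation; endpoint and cadence taken from the log):

	// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver presents a self-signed cert and the probe is unauthenticated,
		// so skip TLS verification; non-200 responses simply mean "not ready yet".
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is typically just "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.151:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
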
	I0203 11:52:23.842774  175844 cni.go:84] Creating CNI manager for ""
	I0203 11:52:23.842781  175844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 11:52:23.844446  175844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 11:52:23.845620  175844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 11:52:23.878399  175844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
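	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI chosen above. A sketch of generating a comparable conflist in Go (field values are assumptions based on the pod-network-cidr in this profile, not the exact file minikube ships):

	// bridgecni.go: emit a minimal bridge + portmap CNI conflist.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "1.0.0",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.42.0.0/16", // pod-network-cidr from the StartCluster config above
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}
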
	I0203 11:52:23.908467  175844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:52:23.916662  175844 system_pods.go:59] 8 kube-system pods found
	I0203 11:52:23.916703  175844 system_pods.go:61] "coredns-668d6bf9bc-cr5dw" [3d1b7381-6b42-4160-ba9d-6fddc2408174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:52:23.916712  175844 system_pods.go:61] "etcd-newest-cni-586043" [16317397-91b4-459d-a91f-ce10dc19f0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:52:23.916721  175844 system_pods.go:61] "kube-apiserver-newest-cni-586043" [79bd9928-7593-4eda-a9d6-fe3fe263c33a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:52:23.916727  175844 system_pods.go:61] "kube-controller-manager-newest-cni-586043" [8a00cc32-1347-42f0-b92b-ecf548236642] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:52:23.916733  175844 system_pods.go:61] "kube-proxy-c4bgm" [1a4f7c54-c137-401a-b004-2c93f251a646] Running
	I0203 11:52:23.916738  175844 system_pods.go:61] "kube-scheduler-newest-cni-586043" [e796e345-ebf7-4e6f-86d8-357cade7d05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 11:52:23.916743  175844 system_pods.go:61] "metrics-server-f79f97bbb-w4v6r" [5c20a6e1-46c0-43fb-8057-90f4d2fc6d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 11:52:23.916750  175844 system_pods.go:61] "storage-provisioner" [9720ea0d-98d4-4916-8e71-71a4e7a080d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:52:23.916765  175844 system_pods.go:74] duration metric: took 8.272337ms to wait for pod list to return data ...
	I0203 11:52:23.916777  175844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:52:23.920379  175844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:52:23.920405  175844 node_conditions.go:123] node cpu capacity is 2
	I0203 11:52:23.920416  175844 node_conditions.go:105] duration metric: took 3.634031ms to run NodePressure ...
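	The NodePressure check above lists node capacity and verifies that no pressure conditions are set. A sketch of the same inspection with client-go (assumed as a dependency; the kubeconfig path is the one used in this run):

	// nodepressure.go: print node capacity and the *Pressure conditions.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20354-109432/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Matches the "ephemeral capacity" and "cpu capacity" lines in the log.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				// A healthy node reports False for all of these.
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}
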
	I0203 11:52:23.920432  175844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 11:52:24.231056  175844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:52:24.244201  175844 ops.go:34] apiserver oom_adj: -16
	I0203 11:52:24.244230  175844 kubeadm.go:597] duration metric: took 8.228599887s to restartPrimaryControlPlane
	I0203 11:52:24.244242  175844 kubeadm.go:394] duration metric: took 8.278345475s to StartCluster
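	The oom_adj check above reads /proc/<pid>/oom_adj for the apiserver; -16 means the kernel's OOM killer is strongly discouraged from picking it. A small Go sketch of the same probe (illustrative, not minikube's code):

	// oomadj.go: mirror `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
			return
		}
		pid := strings.Fields(string(out))[0] // first matching PID
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("kube-apiserver oom_adj:", strings.TrimSpace(string(data)))
	}
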
	I0203 11:52:24.244264  175844 settings.go:142] acquiring lock: {Name:mk7f08542cc4ae303b222901a9d369cc0753d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:24.244357  175844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:52:24.245400  175844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/kubeconfig: {Name:mkcb7c4c45c6b828504faaa9fea59b0bb0855286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:52:24.245703  175844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0203 11:52:24.245788  175844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 11:52:24.245905  175844 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-586043"
	I0203 11:52:24.245926  175844 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-586043"
	W0203 11:52:24.245937  175844 addons.go:247] addon storage-provisioner should already be in state true
	I0203 11:52:24.245931  175844 addons.go:69] Setting default-storageclass=true in profile "newest-cni-586043"
	I0203 11:52:24.245943  175844 addons.go:69] Setting metrics-server=true in profile "newest-cni-586043"
	I0203 11:52:24.245967  175844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-586043"
	I0203 11:52:24.245974  175844 addons.go:238] Setting addon metrics-server=true in "newest-cni-586043"
	I0203 11:52:24.245979  175844 addons.go:69] Setting dashboard=true in profile "newest-cni-586043"
	I0203 11:52:24.246021  175844 config.go:182] Loaded profile config "newest-cni-586043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:52:24.246026  175844 addons.go:238] Setting addon dashboard=true in "newest-cni-586043"
	W0203 11:52:24.246043  175844 addons.go:247] addon dashboard should already be in state true
	W0203 11:52:24.246056  175844 addons.go:247] addon metrics-server should already be in state true
	I0203 11:52:24.246096  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.246136  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.245971  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.246487  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246541  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246546  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246575  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246627  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.246637  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246653  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.246581  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.249614  175844 out.go:177] * Verifying Kubernetes components...
	I0203 11:52:24.250962  175844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:52:24.264646  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0203 11:52:24.265348  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.266025  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.266044  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.267072  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0203 11:52:24.267076  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0203 11:52:24.267102  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.267178  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0203 11:52:24.267615  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267668  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267618  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.267685  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.267825  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.268155  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.268178  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.268196  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.268242  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.268528  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.268584  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.269015  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.269099  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.269135  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.269171  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.269197  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.269701  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.270284  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.270327  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.273950  175844 addons.go:238] Setting addon default-storageclass=true in "newest-cni-586043"
	W0203 11:52:24.273978  175844 addons.go:247] addon default-storageclass should already be in state true
	I0203 11:52:24.274042  175844 host.go:66] Checking if "newest-cni-586043" exists ...
	I0203 11:52:24.274412  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.274462  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.289025  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0203 11:52:24.289035  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0203 11:52:24.289630  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.289674  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.290176  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.290206  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.290318  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.290332  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.290650  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.290878  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.290901  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.291096  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.293650  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.293656  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.295600  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0203 11:52:24.296118  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.296790  175844 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0203 11:52:24.297034  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.297192  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.297586  175844 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0203 11:52:24.297621  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.298235  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.298334  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0203 11:52:24.298350  175844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0203 11:52:24.298376  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.298562  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0203 11:52:24.298934  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.299547  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.299566  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.299918  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.300034  175844 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0203 11:52:24.300652  175844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:52:24.300697  175844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:52:24.301008  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0203 11:52:24.301026  175844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0203 11:52:24.301046  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.301895  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.303400  175844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:52:24.304035  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304500  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304487  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.304531  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.304644  175844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:52:24.304655  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:52:24.304667  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.305128  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.305143  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.305147  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.305387  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.305405  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.305612  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.305657  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.305776  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.305795  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.305907  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.307560  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.307791  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.307818  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.307956  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.308107  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.308228  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.308344  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.341888  175844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0203 11:52:24.342380  175844 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:52:24.342876  175844 main.go:141] libmachine: Using API Version  1
	I0203 11:52:24.342908  175844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:52:24.343222  175844 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:52:24.343423  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetState
	I0203 11:52:24.345056  175844 main.go:141] libmachine: (newest-cni-586043) Calling .DriverName
	I0203 11:52:24.345281  175844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:52:24.345300  175844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:52:24.345321  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHHostname
	I0203 11:52:24.348062  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.348521  175844 main.go:141] libmachine: (newest-cni-586043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:62:16", ip: ""} in network mk-newest-cni-586043: {Iface:virbr4 ExpiryTime:2025-02-03 12:51:06 +0000 UTC Type:0 Mac:52:54:00:47:62:16 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:newest-cni-586043 Clientid:01:52:54:00:47:62:16}
	I0203 11:52:24.348557  175844 main.go:141] libmachine: (newest-cni-586043) DBG | domain newest-cni-586043 has defined IP address 192.168.72.151 and MAC address 52:54:00:47:62:16 in network mk-newest-cni-586043
	I0203 11:52:24.348704  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHPort
	I0203 11:52:24.348944  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHKeyPath
	I0203 11:52:24.349105  175844 main.go:141] libmachine: (newest-cni-586043) Calling .GetSSHUsername
	I0203 11:52:24.349239  175844 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/newest-cni-586043/id_rsa Username:docker}
	I0203 11:52:24.423706  175844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:52:24.438635  175844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:52:24.438729  175844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:52:24.451466  175844 api_server.go:72] duration metric: took 205.720039ms to wait for apiserver process to appear ...
	I0203 11:52:24.451494  175844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:52:24.451512  175844 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0203 11:52:24.455975  175844 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0203 11:52:24.456944  175844 api_server.go:141] control plane version: v1.32.1
	I0203 11:52:24.456960  175844 api_server.go:131] duration metric: took 5.461365ms to wait for apiserver health ...
	I0203 11:52:24.456967  175844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:52:24.462575  175844 system_pods.go:59] 8 kube-system pods found
	I0203 11:52:24.462602  175844 system_pods.go:61] "coredns-668d6bf9bc-cr5dw" [3d1b7381-6b42-4160-ba9d-6fddc2408174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 11:52:24.462609  175844 system_pods.go:61] "etcd-newest-cni-586043" [16317397-91b4-459d-a91f-ce10dc19f0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 11:52:24.462618  175844 system_pods.go:61] "kube-apiserver-newest-cni-586043" [79bd9928-7593-4eda-a9d6-fe3fe263c33a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 11:52:24.462624  175844 system_pods.go:61] "kube-controller-manager-newest-cni-586043" [8a00cc32-1347-42f0-b92b-ecf548236642] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 11:52:24.462630  175844 system_pods.go:61] "kube-proxy-c4bgm" [1a4f7c54-c137-401a-b004-2c93f251a646] Running
	I0203 11:52:24.462636  175844 system_pods.go:61] "kube-scheduler-newest-cni-586043" [e796e345-ebf7-4e6f-86d8-357cade7d05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 11:52:24.462640  175844 system_pods.go:61] "metrics-server-f79f97bbb-w4v6r" [5c20a6e1-46c0-43fb-8057-90f4d2fc6d7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 11:52:24.462646  175844 system_pods.go:61] "storage-provisioner" [9720ea0d-98d4-4916-8e71-71a4e7a080d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0203 11:52:24.462651  175844 system_pods.go:74] duration metric: took 5.679512ms to wait for pod list to return data ...
	I0203 11:52:24.462661  175844 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:52:24.464973  175844 default_sa.go:45] found service account: "default"
	I0203 11:52:24.464991  175844 default_sa.go:55] duration metric: took 2.324944ms for default service account to be created ...
	I0203 11:52:24.465002  175844 kubeadm.go:582] duration metric: took 219.259944ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0203 11:52:24.465020  175844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:52:24.467037  175844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:52:24.467054  175844 node_conditions.go:123] node cpu capacity is 2
	I0203 11:52:24.467064  175844 node_conditions.go:105] duration metric: took 2.039421ms to run NodePressure ...
	I0203 11:52:24.467074  175844 start.go:241] waiting for startup goroutines ...
	I0203 11:52:24.510558  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:52:24.518267  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0203 11:52:24.518302  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0203 11:52:24.539840  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0203 11:52:24.539866  175844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0203 11:52:24.569697  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0203 11:52:24.569727  175844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0203 11:52:24.583824  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:52:24.598897  175844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:52:24.598921  175844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0203 11:52:24.610164  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0203 11:52:24.610188  175844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0203 11:52:24.677539  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0203 11:52:24.677569  175844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0203 11:52:24.700565  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 11:52:24.799702  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0203 11:52:24.799733  175844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0203 11:52:24.916536  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0203 11:52:24.916568  175844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0203 11:52:25.033797  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0203 11:52:25.033826  175844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0203 11:52:25.062256  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.062298  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.062596  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.062614  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.062622  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.062629  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.062867  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.062887  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.091731  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:25.091759  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:25.092053  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:25.092073  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:25.092088  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:25.130272  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0203 11:52:25.130306  175844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0203 11:52:25.184756  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0203 11:52:25.184789  175844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0203 11:52:25.245270  175844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 11:52:25.245304  175844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0203 11:52:25.294755  175844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 11:52:26.053809  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.469944422s)
	I0203 11:52:26.053870  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.053884  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.054221  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.054266  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.054293  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.054313  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.054324  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.054556  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.054575  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.054586  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.087724  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.387111638s)
	I0203 11:52:26.087790  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.087808  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.088123  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.088159  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.088184  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.088200  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.088206  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.088502  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.088531  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.088545  175844 addons.go:479] Verifying addon metrics-server=true in "newest-cni-586043"
	I0203 11:52:26.531533  175844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.236731103s)
	I0203 11:52:26.531586  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.531597  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.532020  175844 main.go:141] libmachine: (newest-cni-586043) DBG | Closing plugin on server side
	I0203 11:52:26.532039  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.532055  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.532069  175844 main.go:141] libmachine: Making call to close driver server
	I0203 11:52:26.532081  175844 main.go:141] libmachine: (newest-cni-586043) Calling .Close
	I0203 11:52:26.532328  175844 main.go:141] libmachine: Successfully made call to close driver server
	I0203 11:52:26.532345  175844 main.go:141] libmachine: Making call to close connection to plugin binary
	I0203 11:52:26.533858  175844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-586043 addons enable metrics-server
	
	I0203 11:52:26.535168  175844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0203 11:52:26.536259  175844 addons.go:514] duration metric: took 2.290478763s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0203 11:52:26.536307  175844 start.go:246] waiting for cluster config update ...
	I0203 11:52:26.536322  175844 start.go:255] writing updated cluster config ...
	I0203 11:52:26.536548  175844 ssh_runner.go:195] Run: rm -f paused
	I0203 11:52:26.583516  175844 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:52:26.585070  175844 out.go:177] * Done! kubectl is now configured to use "newest-cni-586043" cluster and "default" namespace by default
	I0203 11:52:35.872135  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:52:35.872966  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:35.873172  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:40.873720  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:40.873968  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:52:50.874520  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:52:50.874761  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:10.875767  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:10.876032  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878348  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:53:50.878572  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:53:50.878585  173069 kubeadm.go:310] 
	I0203 11:53:50.878677  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:53:50.878746  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:53:50.878756  173069 kubeadm.go:310] 
	I0203 11:53:50.878805  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:53:50.878848  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:53:50.878993  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:53:50.879004  173069 kubeadm.go:310] 
	I0203 11:53:50.879145  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:53:50.879192  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:53:50.879235  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:53:50.879245  173069 kubeadm.go:310] 
	I0203 11:53:50.879390  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:53:50.879507  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:53:50.879517  173069 kubeadm.go:310] 
	I0203 11:53:50.879660  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:53:50.879782  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:53:50.879904  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:53:50.880019  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:53:50.880033  173069 kubeadm.go:310] 
	I0203 11:53:50.880322  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:53:50.880397  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:53:50.880465  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0203 11:53:50.880620  173069 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
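For context on the block above: the repeated [kubelet-check] lines are kubeadm polling the kubelet's local health endpoint and getting connection refused. A minimal manual reproduction on the node (assuming shell access to the VM for the old-k8s-version-517711 profile named in the post-mortem sections further below, and using only commands already quoted in this output) would be roughly:

	curl -sSL http://localhost:10248/healthz    # the endpoint kubeadm waits on
	systemctl status kubelet                    # check whether the service is running
	journalctl -xeu kubelet                     # inspect why it exited, if it did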
	
	I0203 11:53:50.880666  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0203 11:53:56.208593  173069 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.327900088s)
	I0203 11:53:56.208687  173069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:53:56.222067  173069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:53:56.231274  173069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:53:56.231296  173069 kubeadm.go:157] found existing configuration files:
	
	I0203 11:53:56.231344  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:53:56.240522  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:53:56.240587  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:53:56.249755  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:53:56.258586  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:53:56.258645  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:53:56.267974  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.276669  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:53:56.276720  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:53:56.285661  173069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:53:56.294673  173069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:53:56.294734  173069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:53:56.303819  173069 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:53:56.510714  173069 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:55:52.911681  173069 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 11:55:52.911777  173069 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0203 11:55:52.913157  173069 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0203 11:55:52.913224  173069 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:55:52.913299  173069 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:55:52.913463  173069 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:55:52.913598  173069 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 11:55:52.913672  173069 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:55:52.915764  173069 out.go:235]   - Generating certificates and keys ...
	I0203 11:55:52.915857  173069 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:55:52.915908  173069 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:55:52.915975  173069 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 11:55:52.916023  173069 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0203 11:55:52.916077  173069 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 11:55:52.916150  173069 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0203 11:55:52.916233  173069 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0203 11:55:52.916309  173069 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0203 11:55:52.916424  173069 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 11:55:52.916508  173069 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 11:55:52.916542  173069 kubeadm.go:310] [certs] Using the existing "sa" key
	I0203 11:55:52.916589  173069 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:55:52.916635  173069 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:55:52.916682  173069 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:55:52.916747  173069 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:55:52.916798  173069 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:55:52.916898  173069 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:55:52.916991  173069 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:55:52.917027  173069 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:55:52.917082  173069 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:55:52.918947  173069 out.go:235]   - Booting up control plane ...
	I0203 11:55:52.919052  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:55:52.919135  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:55:52.919213  173069 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:55:52.919298  173069 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:55:52.919440  173069 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 11:55:52.919509  173069 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0203 11:55:52.919578  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919738  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.919799  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.919950  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920007  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920158  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920230  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920452  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920558  173069 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 11:55:52.920806  173069 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 11:55:52.920815  173069 kubeadm.go:310] 
	I0203 11:55:52.920849  173069 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0203 11:55:52.920884  173069 kubeadm.go:310] 		timed out waiting for the condition
	I0203 11:55:52.920891  173069 kubeadm.go:310] 
	I0203 11:55:52.920924  173069 kubeadm.go:310] 	This error is likely caused by:
	I0203 11:55:52.920954  173069 kubeadm.go:310] 		- The kubelet is not running
	I0203 11:55:52.921051  173069 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 11:55:52.921066  173069 kubeadm.go:310] 
	I0203 11:55:52.921160  173069 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 11:55:52.921199  173069 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0203 11:55:52.921228  173069 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0203 11:55:52.921235  173069 kubeadm.go:310] 
	I0203 11:55:52.921355  173069 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 11:55:52.921465  173069 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 11:55:52.921476  173069 kubeadm.go:310] 
	I0203 11:55:52.921595  173069 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0203 11:55:52.921666  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0203 11:55:52.921725  173069 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0203 11:55:52.921781  173069 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0203 11:55:52.921820  173069 kubeadm.go:310] 
	I0203 11:55:52.921866  173069 kubeadm.go:394] duration metric: took 8m3.159723737s to StartCluster
	I0203 11:55:52.921917  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0203 11:55:52.921979  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0203 11:55:52.965327  173069 cri.go:89] found id: ""
	I0203 11:55:52.965360  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.965370  173069 logs.go:284] No container was found matching "kube-apiserver"
	I0203 11:55:52.965377  173069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0203 11:55:52.965429  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0203 11:55:52.999197  173069 cri.go:89] found id: ""
	I0203 11:55:52.999224  173069 logs.go:282] 0 containers: []
	W0203 11:55:52.999233  173069 logs.go:284] No container was found matching "etcd"
	I0203 11:55:52.999239  173069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0203 11:55:52.999290  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0203 11:55:53.033201  173069 cri.go:89] found id: ""
	I0203 11:55:53.033231  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.033239  173069 logs.go:284] No container was found matching "coredns"
	I0203 11:55:53.033245  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0203 11:55:53.033298  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0203 11:55:53.069227  173069 cri.go:89] found id: ""
	I0203 11:55:53.069262  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.069274  173069 logs.go:284] No container was found matching "kube-scheduler"
	I0203 11:55:53.069282  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0203 11:55:53.069361  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0203 11:55:53.102418  173069 cri.go:89] found id: ""
	I0203 11:55:53.102448  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.102460  173069 logs.go:284] No container was found matching "kube-proxy"
	I0203 11:55:53.102467  173069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0203 11:55:53.102595  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0203 11:55:53.134815  173069 cri.go:89] found id: ""
	I0203 11:55:53.134846  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.134859  173069 logs.go:284] No container was found matching "kube-controller-manager"
	I0203 11:55:53.134865  173069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0203 11:55:53.134916  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0203 11:55:53.184017  173069 cri.go:89] found id: ""
	I0203 11:55:53.184063  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.184075  173069 logs.go:284] No container was found matching "kindnet"
	I0203 11:55:53.184086  173069 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0203 11:55:53.184180  173069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0203 11:55:53.218584  173069 cri.go:89] found id: ""
	I0203 11:55:53.218620  173069 logs.go:282] 0 containers: []
	W0203 11:55:53.218630  173069 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0203 11:55:53.218642  173069 logs.go:123] Gathering logs for kubelet ...
	I0203 11:55:53.218656  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 11:55:53.267577  173069 logs.go:123] Gathering logs for dmesg ...
	I0203 11:55:53.267624  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 11:55:53.280882  173069 logs.go:123] Gathering logs for describe nodes ...
	I0203 11:55:53.280915  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 11:55:53.352344  173069 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 11:55:53.352371  173069 logs.go:123] Gathering logs for CRI-O ...
	I0203 11:55:53.352385  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0203 11:55:53.451451  173069 logs.go:123] Gathering logs for container status ...
	I0203 11:55:53.451495  173069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0203 11:55:53.488076  173069 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 11:55:53.488133  173069 out.go:270] * 
	W0203 11:55:53.488199  173069 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.488213  173069 out.go:270] * 
	W0203 11:55:53.489069  173069 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 11:55:53.492291  173069 out.go:201] 
	W0203 11:55:53.493552  173069 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 11:55:53.493606  173069 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 11:55:53.493647  173069 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 11:55:53.494859  173069 out.go:201] 
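The Suggestion line above refers to minikube's --extra-config passthrough for kubelet settings. A hypothetical retry of this start with that setting (binary path and profile name taken from the surrounding log, flag value from the suggestion itself; the original run's other flags are omitted here) might look like:

	out/minikube-linux-amd64 start -p old-k8s-version-517711 --extra-config=kubelet.cgroup-driver=systemd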
	
	
	==> CRI-O <==
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.667716645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584687667695373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9dc96ae-52af-409b-ad57-8b1796db879b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.668166032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70b26984-1640-40df-a789-e15542309099 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.668222375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70b26984-1640-40df-a789-e15542309099 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.668264500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=70b26984-1640-40df-a789-e15542309099 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.701808382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d6ea235-81ec-4878-9b23-3a2a1678ab81 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.701903135Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d6ea235-81ec-4878-9b23-3a2a1678ab81 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.702994778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88ec311a-6bc5-47bd-b8b9-c4a912f5869b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.703437244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584687703402433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88ec311a-6bc5-47bd-b8b9-c4a912f5869b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.703913475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ce8a1c7-dde6-4790-825c-74ea50019d3a name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.703970588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ce8a1c7-dde6-4790-825c-74ea50019d3a name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.704004092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2ce8a1c7-dde6-4790-825c-74ea50019d3a name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.732166209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a400013-751a-46e1-9fb7-b46355f10ab3 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.732266804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a400013-751a-46e1-9fb7-b46355f10ab3 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.733530084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59bfd340-fbb2-481e-a86c-415022f61d92 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.733962539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584687733942139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59bfd340-fbb2-481e-a86c-415022f61d92 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.734472357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9a91511-491b-4739-9810-fb8307bfda51 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.734563349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9a91511-491b-4739-9810-fb8307bfda51 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.734602522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b9a91511-491b-4739-9810-fb8307bfda51 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.765570083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5c00a67-9ab2-469d-895c-ed79975607b9 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.765671073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5c00a67-9ab2-469d-895c-ed79975607b9 name=/runtime.v1.RuntimeService/Version
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.767090456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3248eb89-7792-4748-a053-6e5f8c5d7ae7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.767503662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738584687767481503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3248eb89-7792-4748-a053-6e5f8c5d7ae7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.768192683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae9e2d09-e354-459b-8543-7ea68bcdde56 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.768251350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae9e2d09-e354-459b-8543-7ea68bcdde56 name=/runtime.v1.RuntimeService/ListContainers
	Feb 03 12:11:27 old-k8s-version-517711 crio[637]: time="2025-02-03 12:11:27.768285243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae9e2d09-e354-459b-8543-7ea68bcdde56 name=/runtime.v1.RuntimeService/ListContainers
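The CRI-O debug entries above show ListContainers consistently returning an empty list, which matches the empty container status table below: no control-plane containers were ever created. To enumerate Kubernetes containers directly against CRI-O, the kubeadm advice earlier in this log gives the following (CONTAINERID is a placeholder; root privileges are typically required):

	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID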
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb 3 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054598] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038441] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.998629] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.169563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.572597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.331149] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.081564] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074399] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.170591] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.142363] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.233678] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.346278] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064365] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.290562] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[Feb 3 11:48] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 3 11:51] systemd-fstab-generator[5068]: Ignoring "noauto" option for root device
	[Feb 3 11:53] systemd-fstab-generator[5353]: Ignoring "noauto" option for root device
	[  +0.065405] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:11:27 up 24 min,  0 users,  load average: 0.00, 0.02, 0.02
	Linux old-k8s-version-517711 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00093c100, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b395f0, 0x24, 0x60, 0x7f69a5c2a788, 0x118, ...)
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: net/http.(*Transport).dial(0xc000597e00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b395f0, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: net/http.(*Transport).dialConn(0xc000597e00, 0x4f7fe00, 0xc000120018, 0x0, 0xc0002e2540, 0x5, 0xc000b395f0, 0x24, 0x0, 0xc000b7c360, ...)
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: net/http.(*Transport).dialConnFor(0xc000597e00, 0xc0009a1340)
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: created by net/http.(*Transport).queueForDial
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: goroutine 174 [select]:
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0003936e0, 0xc00018fd00, 0xc000b595c0, 0xc000b59560)
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]: created by net.(*netFD).connect
	Feb 03 12:11:24 old-k8s-version-517711 kubelet[7228]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Feb 03 12:11:24 old-k8s-version-517711 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 12:11:24 old-k8s-version-517711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 12:11:25 old-k8s-version-517711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 182.
	Feb 03 12:11:25 old-k8s-version-517711 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 12:11:25 old-k8s-version-517711 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 12:11:25 old-k8s-version-517711 kubelet[7237]: I0203 12:11:25.381934    7237 server.go:416] Version: v1.20.0
	Feb 03 12:11:25 old-k8s-version-517711 kubelet[7237]: I0203 12:11:25.382140    7237 server.go:837] Client rotation is on, will bootstrap in background
	Feb 03 12:11:25 old-k8s-version-517711 kubelet[7237]: I0203 12:11:25.383908    7237 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 12:11:25 old-k8s-version-517711 kubelet[7237]: W0203 12:11:25.384771    7237 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 03 12:11:25 old-k8s-version-517711 kubelet[7237]: I0203 12:11:25.385111    7237 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 2 (231.250376ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-517711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (391.55s)
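
The post-mortem above shows why this failure cannot resolve itself: the kubelet on old-k8s-version-517711 is stuck in a systemd restart loop (restart counter 182) and every request to the API server on localhost:8443 is refused, which is why both the container status table and "kubectl describe nodes" come back empty. A minimal manual triage sketch for a node in this state, reusing this run's profile name and the same "minikube ... ssh" invocation style seen elsewhere in this report (illustrative commands, not part of the test suite):

  out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo systemctl status kubelet --no-pager"
  out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo journalctl -u kubelet --no-pager -n 100"
  # an empty listing here matches the empty container status section above
  out/minikube-linux-amd64 -p old-k8s-version-517711 ssh "sudo crictl ps -a"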


Test pass (270/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.33
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.1/json-events 14.45
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 84.91
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 136.97
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 11.5
35 TestAddons/parallel/Registry 18.41
37 TestAddons/parallel/InspektorGadget 12.72
38 TestAddons/parallel/MetricsServer 6.37
40 TestAddons/parallel/CSI 57.04
41 TestAddons/parallel/Headlamp 17.97
42 TestAddons/parallel/CloudSpanner 6.67
43 TestAddons/parallel/LocalPath 58.09
44 TestAddons/parallel/NvidiaDevicePlugin 5.52
45 TestAddons/parallel/Yakd 10.76
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 62.25
49 TestCertExpiration 311.74
51 TestForceSystemdFlag 103.26
52 TestForceSystemdEnv 44.63
54 TestKVMDriverInstallOrUpdate 3.9
58 TestErrorSpam/setup 42.61
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.77
61 TestErrorSpam/pause 1.57
62 TestErrorSpam/unpause 1.73
63 TestErrorSpam/stop 6.01
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.41
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 54.92
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.83
75 TestFunctional/serial/CacheCmd/cache/add_local 2.55
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 33.11
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.2
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 30.81
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.17
97 TestFunctional/parallel/ServiceCmdConnect 10.53
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 50
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 24.4
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.23
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 1.55
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
119 TestFunctional/parallel/ImageCommands/Setup 1.74
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.63
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 0.87
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
142 TestFunctional/parallel/ServiceCmd/List 0.26
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
145 TestFunctional/parallel/ServiceCmd/Format 0.44
146 TestFunctional/parallel/ServiceCmd/URL 0.43
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
148 TestFunctional/parallel/MountCmd/any-port 22.07
149 TestFunctional/parallel/ProfileCmd/profile_list 0.46
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
151 TestFunctional/parallel/MountCmd/specific-port 1.95
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 198.39
161 TestMultiControlPlane/serial/DeployApp 8.05
162 TestMultiControlPlane/serial/PingHostFromPods 1.19
163 TestMultiControlPlane/serial/AddWorkerNode 54.86
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
166 TestMultiControlPlane/serial/CopyFile 13.08
167 TestMultiControlPlane/serial/StopSecondaryNode 91.67
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
169 TestMultiControlPlane/serial/RestartSecondaryNode 51.13
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 426.19
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.16
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
174 TestMultiControlPlane/serial/StopCluster 272.95
175 TestMultiControlPlane/serial/RestartCluster 120.87
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
177 TestMultiControlPlane/serial/AddSecondaryNode 76.76
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
182 TestJSONOutput/start/Command 57.68
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.67
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.61
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.33
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 90.4
214 TestMountStart/serial/StartWithMountFirst 29.72
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 31.86
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.7
219 TestMountStart/serial/VerifyMountPostDelete 0.38
220 TestMountStart/serial/Stop 1.28
221 TestMountStart/serial/RestartStopped 24.56
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 114.97
226 TestMultiNode/serial/DeployApp2Nodes 5.58
227 TestMultiNode/serial/PingHostFrom2Pods 0.77
228 TestMultiNode/serial/AddNode 50.71
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.61
231 TestMultiNode/serial/CopyFile 7.33
232 TestMultiNode/serial/StopNode 2.26
233 TestMultiNode/serial/StartAfterStop 39.11
234 TestMultiNode/serial/RestartKeepsNodes 342.77
235 TestMultiNode/serial/DeleteNode 2.61
236 TestMultiNode/serial/StopMultiNode 181.87
237 TestMultiNode/serial/RestartMultiNode 115.96
238 TestMultiNode/serial/ValidateNameConflict 44.39
245 TestScheduledStopUnix 115.57
249 TestRunningBinaryUpgrade 233.52
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 96.2
256 TestNoKubernetes/serial/StartWithStopK8s 69.67
257 TestNoKubernetes/serial/Start 52.22
265 TestNetworkPlugins/group/false 3.96
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
270 TestNoKubernetes/serial/ProfileList 3.78
271 TestNoKubernetes/serial/Stop 2.31
272 TestNoKubernetes/serial/StartNoArgs 43.86
273 TestStoppedBinaryUpgrade/Setup 3.11
274 TestStoppedBinaryUpgrade/Upgrade 124.35
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
284 TestPause/serial/Start 71.85
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
287 TestNetworkPlugins/group/auto/Start 50.11
288 TestNetworkPlugins/group/kindnet/Start 68.8
289 TestNetworkPlugins/group/calico/Start 107.75
290 TestNetworkPlugins/group/auto/KubeletFlags 0.21
291 TestNetworkPlugins/group/auto/NetCatPod 9.33
292 TestNetworkPlugins/group/auto/DNS 0.29
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestNetworkPlugins/group/auto/HairPin 0.13
295 TestNetworkPlugins/group/custom-flannel/Start 80.19
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
298 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
299 TestNetworkPlugins/group/kindnet/DNS 0.15
300 TestNetworkPlugins/group/kindnet/Localhost 0.14
301 TestNetworkPlugins/group/kindnet/HairPin 0.15
302 TestNetworkPlugins/group/enable-default-cni/Start 75.01
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.21
305 TestNetworkPlugins/group/calico/NetCatPod 10.24
306 TestNetworkPlugins/group/calico/DNS 0.19
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.15
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.89
311 TestNetworkPlugins/group/custom-flannel/DNS 0.15
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
313 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
314 TestNetworkPlugins/group/flannel/Start 78.35
315 TestNetworkPlugins/group/bridge/Start 79.6
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.34
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
325 TestNetworkPlugins/group/flannel/NetCatPod 11.29
327 TestStartStop/group/no-preload/serial/FirstStart 80.67
328 TestNetworkPlugins/group/flannel/DNS 0.16
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
330 TestNetworkPlugins/group/flannel/Localhost 0.14
331 TestNetworkPlugins/group/flannel/HairPin 0.36
332 TestNetworkPlugins/group/bridge/NetCatPod 13.17
333 TestNetworkPlugins/group/bridge/DNS 0.15
334 TestNetworkPlugins/group/bridge/Localhost 0.12
335 TestNetworkPlugins/group/bridge/HairPin 0.13
337 TestStartStop/group/embed-certs/serial/FirstStart 62.62
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.51
340 TestStartStop/group/no-preload/serial/DeployApp 11.31
341 TestStartStop/group/embed-certs/serial/DeployApp 11.53
342 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.75
343 TestStartStop/group/no-preload/serial/Stop 91.05
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
345 TestStartStop/group/embed-certs/serial/Stop 91.25
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.25
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.11
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
350 TestStartStop/group/no-preload/serial/SecondStart 346.91
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
352 TestStartStop/group/embed-certs/serial/SecondStart 332.63
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.44
357 TestStartStop/group/old-k8s-version/serial/Stop 6.31
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
364 TestStartStop/group/embed-certs/serial/Pause 2.6
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/newest-cni/serial/FirstStart 47.34
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
369 TestStartStop/group/no-preload/serial/Pause 2.9
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.14
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.93
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
376 TestStartStop/group/newest-cni/serial/Stop 10.7
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
378 TestStartStop/group/newest-cni/serial/SecondStart 37.02
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
382 TestStartStop/group/newest-cni/serial/Pause 2.37
TestDownloadOnly/v1.20.0/json-events (25.33s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-677633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-677633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.326850399s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.33s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0203 10:33:08.453003  116606 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0203 10:33:08.453122  116606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-677633
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-677633: exit status 85 (64.852921ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-677633 | jenkins | v1.35.0 | 03 Feb 25 10:32 UTC |          |
	|         | -p download-only-677633        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 10:32:43
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 10:32:43.168605  116618 out.go:345] Setting OutFile to fd 1 ...
	I0203 10:32:43.168701  116618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:32:43.168713  116618 out.go:358] Setting ErrFile to fd 2...
	I0203 10:32:43.168717  116618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:32:43.168922  116618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	W0203 10:32:43.169049  116618 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20354-109432/.minikube/config/config.json: open /home/jenkins/minikube-integration/20354-109432/.minikube/config/config.json: no such file or directory
	I0203 10:32:43.169591  116618 out.go:352] Setting JSON to true
	I0203 10:32:43.170487  116618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4505,"bootTime":1738574258,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 10:32:43.170597  116618 start.go:139] virtualization: kvm guest
	I0203 10:32:43.173187  116618 out.go:97] [download-only-677633] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0203 10:32:43.173332  116618 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball: no such file or directory
	I0203 10:32:43.173404  116618 notify.go:220] Checking for updates...
	I0203 10:32:43.174754  116618 out.go:169] MINIKUBE_LOCATION=20354
	I0203 10:32:43.176070  116618 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:32:43.177559  116618 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 10:32:43.179026  116618 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:32:43.180389  116618 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0203 10:32:43.182849  116618 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 10:32:43.183118  116618 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:32:43.290405  116618 out.go:97] Using the kvm2 driver based on user configuration
	I0203 10:32:43.290457  116618 start.go:297] selected driver: kvm2
	I0203 10:32:43.290464  116618 start.go:901] validating driver "kvm2" against <nil>
	I0203 10:32:43.290808  116618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:32:43.290950  116618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 10:32:43.307752  116618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 10:32:43.307820  116618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 10:32:43.308313  116618 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0203 10:32:43.308469  116618 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 10:32:43.308504  116618 cni.go:84] Creating CNI manager for ""
	I0203 10:32:43.308549  116618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 10:32:43.308562  116618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 10:32:43.308625  116618 start.go:340] cluster config:
	{Name:download-only-677633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-677633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:32:43.308815  116618 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:32:43.310793  116618 out.go:97] Downloading VM boot image ...
	I0203 10:32:43.310839  116618 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20354-109432/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0203 10:32:53.408304  116618 out.go:97] Starting "download-only-677633" primary control-plane node in "download-only-677633" cluster
	I0203 10:32:53.408343  116618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 10:32:53.502581  116618 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0203 10:32:53.502618  116618 cache.go:56] Caching tarball of preloaded images
	I0203 10:32:53.502828  116618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 10:32:53.504939  116618 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0203 10:32:53.504968  116618 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0203 10:32:53.602007  116618 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0203 10:33:06.653018  116618 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0203 10:33:06.653125  116618 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0203 10:33:07.676877  116618 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0203 10:33:07.677284  116618 profile.go:143] Saving config to /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/download-only-677633/config.json ...
	I0203 10:33:07.677321  116618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/download-only-677633/config.json: {Name:mk79fbe6ffa783d95512d275dd11163a96c5b05e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:33:07.677492  116618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0203 10:33:07.677664  116618 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20354-109432/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-677633 host does not exist
	  To start a cluster, run: "minikube start -p download-only-677633"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
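
The Last Start log above records both the cache path of the v1.20.0 preload tarball and the md5 checksum requested in its download URL. A cached preload can be re-verified by hand along the same lines (a sketch; the Jenkins-specific MINIKUBE_HOME from this run is swapped for the default ~/.minikube):

  # checksum value taken from the download URL logged above
  TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
  echo "f93b07cde9c3289306cbaeb7a1803c19  $TARBALL" | md5sum -c -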

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-677633
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.1/json-events (14.45s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-730636 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-730636 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.45440851s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (14.45s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0203 10:33:23.271539  116606 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0203 10:33:23.271622  116606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-730636
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-730636: exit status 85 (63.276665ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-677633 | jenkins | v1.35.0 | 03 Feb 25 10:32 UTC |                     |
	|         | -p download-only-677633        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:33 UTC |
	| delete  | -p download-only-677633        | download-only-677633 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC | 03 Feb 25 10:33 UTC |
	| start   | -o=json --download-only        | download-only-730636 | jenkins | v1.35.0 | 03 Feb 25 10:33 UTC |                     |
	|         | -p download-only-730636        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 10:33:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 10:33:08.859786  116874 out.go:345] Setting OutFile to fd 1 ...
	I0203 10:33:08.859905  116874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:33:08.859922  116874 out.go:358] Setting ErrFile to fd 2...
	I0203 10:33:08.859928  116874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:33:08.860109  116874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 10:33:08.860685  116874 out.go:352] Setting JSON to true
	I0203 10:33:08.861624  116874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4531,"bootTime":1738574258,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 10:33:08.861738  116874 start.go:139] virtualization: kvm guest
	I0203 10:33:08.864060  116874 out.go:97] [download-only-730636] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 10:33:08.864252  116874 notify.go:220] Checking for updates...
	I0203 10:33:08.865564  116874 out.go:169] MINIKUBE_LOCATION=20354
	I0203 10:33:08.866966  116874 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:33:08.868256  116874 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 10:33:08.869415  116874 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:33:08.870629  116874 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0203 10:33:08.873063  116874 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 10:33:08.873322  116874 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:33:08.907448  116874 out.go:97] Using the kvm2 driver based on user configuration
	I0203 10:33:08.907482  116874 start.go:297] selected driver: kvm2
	I0203 10:33:08.907488  116874 start.go:901] validating driver "kvm2" against <nil>
	I0203 10:33:08.907839  116874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:33:08.907929  116874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20354-109432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0203 10:33:08.924249  116874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0203 10:33:08.924329  116874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 10:33:08.924873  116874 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0203 10:33:08.925039  116874 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 10:33:08.925072  116874 cni.go:84] Creating CNI manager for ""
	I0203 10:33:08.925130  116874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0203 10:33:08.925142  116874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 10:33:08.925234  116874 start.go:340] cluster config:
	{Name:download-only-730636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-730636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:33:08.925359  116874 iso.go:125] acquiring lock: {Name:mk9b6d47934249a6b2a57c0b698dce274826cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:33:08.928553  116874 out.go:97] Starting "download-only-730636" primary control-plane node in "download-only-730636" cluster
	I0203 10:33:08.928618  116874 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 10:33:09.410065  116874 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0203 10:33:09.410110  116874 cache.go:56] Caching tarball of preloaded images
	I0203 10:33:09.410285  116874 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0203 10:33:09.412539  116874 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0203 10:33:09.412568  116874 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0203 10:33:09.953822  116874 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20354-109432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-730636 host does not exist
	  To start a cluster, run: "minikube start -p download-only-730636"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-730636
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0203 10:33:23.869254  116606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-745865 --alsologtostderr --binary-mirror http://127.0.0.1:43989 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-745865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-745865
--- PASS: TestBinaryMirror (0.61s)

TestOffline (84.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-162582 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-162582 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.661011623s)
helpers_test.go:175: Cleaning up "offline-crio-162582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-162582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-162582: (1.245656713s)
--- PASS: TestOffline (84.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-106432
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-106432: exit status 85 (52.877928ms)

-- stdout --
	* Profile "addons-106432" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-106432"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-106432
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-106432: exit status 85 (51.394699ms)

-- stdout --
	* Profile "addons-106432" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-106432"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (136.97s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-106432 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-106432 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m16.96711181s)
--- PASS: TestAddons/Setup (136.97s)
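
The Setup run above enables every addon at start time through repeated --addons flags; the same addons can also be listed and toggled on the running profile with the addons subcommand, which is how the parallel tests below clean up after themselves. A short sketch against this run's profile:

  out/minikube-linux-amd64 -p addons-106432 addons list
  out/minikube-linux-amd64 -p addons-106432 addons enable metrics-server
  out/minikube-linux-amd64 -p addons-106432 addons disable metrics-server --alsologtostderr -v=1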

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-106432 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-106432 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (11.5s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-106432 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-106432 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f036f510-1b88-40c2-9d32-f66a37079606] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f036f510-1b88-40c2-9d32-f66a37079606] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004342063s
addons_test.go:633: (dbg) Run:  kubectl --context addons-106432 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-106432 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-106432 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.50s)

TestAddons/parallel/Registry (18.41s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 9.747652ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-ftnp8" [ffc00625-b39b-43ae-ae8e-ea7a8936124f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009143397s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hlmqp" [5bb931f2-dd11-41dc-9467-c7cc823a3860] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003617491s
addons_test.go:331: (dbg) Run:  kubectl --context addons-106432 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-106432 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-106432 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.507277149s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 ip
2025/02/03 10:36:19 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.41s)
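
The Registry test resolves the node IP with "minikube ip" and then fetches port 5000 on that address directly from the host (the DEBUG GET line above). The same reachability check can be repeated by hand; the /v2/_catalog path below is the standard registry HTTP API catalog endpoint rather than anything minikube-specific (a sketch using this run's profile):

  REGISTRY_IP=$(out/minikube-linux-amd64 -p addons-106432 ip)
  curl -s "http://${REGISTRY_IP}:5000/v2/_catalog"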

                                                
                                    
TestAddons/parallel/InspektorGadget (12.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nc6mp" [32ed764f-c6d2-427e-9d15-494d0de186aa] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.047526223s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable inspektor-gadget --alsologtostderr -v=1: (6.675856032s)
--- PASS: TestAddons/parallel/InspektorGadget (12.72s)

TestAddons/parallel/MetricsServer (6.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 9.937397ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-cb689" [e4cd7001-8f29-40aa-8ff7-fed7f02eb492] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004316045s
addons_test.go:402: (dbg) Run:  kubectl --context addons-106432 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable metrics-server --alsologtostderr -v=1: (1.27785789s)
--- PASS: TestAddons/parallel/MetricsServer (6.37s)

                                                
                                    
TestAddons/parallel/CSI (57.04s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0203 10:36:01.801003  116606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0203 10:36:01.809523  116606 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0203 10:36:01.809559  116606 kapi.go:107] duration metric: took 8.570727ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.582712ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-106432 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-106432 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [61014de7-bf88-4670-90e5-d4861fde6441] Pending
helpers_test.go:344: "task-pv-pod" [61014de7-bf88-4670-90e5-d4861fde6441] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [61014de7-bf88-4670-90e5-d4861fde6441] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.005461424s
addons_test.go:511: (dbg) Run:  kubectl --context addons-106432 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-106432 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-106432 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-106432 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-106432 delete pod task-pv-pod: (1.299635115s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-106432 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-106432 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-106432 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a4c18df3-1117-45c0-bb11-9780f719781f] Pending
helpers_test.go:344: "task-pv-pod-restore" [a4c18df3-1117-45c0-bb11-9780f719781f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a4c18df3-1117-45c0-bb11-9780f719781f] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003920761s
addons_test.go:553: (dbg) Run:  kubectl --context addons-106432 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-106432 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-106432 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.723329467s)
--- PASS: TestAddons/parallel/CSI (57.04s)
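The repeated helpers_test.go:394 lines above are a poll loop: the helper re-reads the PVC phase until the claim reports Bound. A minimal sketch of that loop in Go, assuming kubectl is on PATH; waitForPVCPhase is an illustrative helper, not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase re-runs `kubectl get pvc ... -o jsonpath={.status.phase}`
// until the claim reaches the wanted phase (e.g. "Bound") or the timeout expires.
func waitForPVCPhase(ctx, ns, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", pvc,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-106432", "default", "hpvc-restore", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}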

                                                
                                    
TestAddons/parallel/Headlamp (17.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-106432 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-r4dnn" [5dbfd8d0-2897-4ef4-adb5-7c4fcd819c46] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-r4dnn" [5dbfd8d0-2897-4ef4-adb5-7c4fcd819c46] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-r4dnn" [5dbfd8d0-2897-4ef4-adb5-7c4fcd819c46] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004862531s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable headlamp --alsologtostderr -v=1: (6.059996897s)
--- PASS: TestAddons/parallel/Headlamp (17.97s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-hzlqt" [6afe5106-62b7-4eb1-85e4-0d2f43d44f76] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003636437s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

                                                
                                    
TestAddons/parallel/LocalPath (58.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-106432 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-106432 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [405e530f-5f39-418b-84ba-b68f1f342f33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [405e530f-5f39-418b-84ba-b68f1f342f33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [405e530f-5f39-418b-84ba-b68f1f342f33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00401338s
addons_test.go:906: (dbg) Run:  kubectl --context addons-106432 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 ssh "cat /opt/local-path-provisioner/pvc-3aa481ba-a49b-47b8-bb6c-20fb974304cd_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-106432 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-106432 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.199623271s)
--- PASS: TestAddons/parallel/LocalPath (58.09s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xfb74" [0890e46b-b717-401e-a098-3ee68502198f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005184956s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

                                                
                                    
TestAddons/parallel/Yakd (10.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-jrgjw" [38093317-e655-4947-b270-3903da22aee7] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005319221s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-106432 addons disable yakd --alsologtostderr -v=1: (5.75324639s)
--- PASS: TestAddons/parallel/Yakd (10.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-106432
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-106432: (1m30.974869815s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-106432
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-106432
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-106432
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (62.25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-982566 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0203 11:34:00.131176  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-982566 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.980855997s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-982566 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-982566 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-982566 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-982566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-982566
--- PASS: TestCertOptions (62.25s)
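TestCertOptions passes extra --apiserver-ips/--apiserver-names and then inspects apiserver.crt with openssl over ssh. The same subject-alternative-name check can be expressed with Go's crypto/x509; the sketch below is illustrative only (it reads a local PEM copy rather than the file inside the VM) and is not how the test itself verifies the certificate.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// certHasSAN reports whether the PEM certificate at path carries the given
// IP and DNS subject alternative names, the properties the test checks above.
func certHasSAN(path, wantIP, wantDNS string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(net.ParseIP(wantIP)) {
			ipOK = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			dnsOK = true
		}
	}
	return ipOK && dnsOK, nil
}

func main() {
	ok, err := certHasSAN("apiserver.crt", "192.168.15.15", "www.google.com")
	fmt.Println(ok, err)
}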

                                                
                                    
TestCertExpiration (311.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-149645 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-149645 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m29.571939159s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-149645 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-149645 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.443109303s)
helpers_test.go:175: Cleaning up "cert-expiration-149645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-149645
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-149645: (1.728122258s)
--- PASS: TestCertExpiration (311.74s)

                                                
                                    
TestForceSystemdFlag (103.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-758006 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-758006 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m42.034450305s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-758006 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-758006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-758006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-758006: (1.019408107s)
--- PASS: TestForceSystemdFlag (103.26s)

                                                
                                    
TestForceSystemdEnv (44.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-544292 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-544292 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.841157608s)
helpers_test.go:175: Cleaning up "force-systemd-env-544292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-544292
--- PASS: TestForceSystemdEnv (44.63s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.9s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0203 11:35:18.450015  116606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0203 11:35:18.450267  116606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0203 11:35:18.481526  116606 install.go:62] docker-machine-driver-kvm2: exit status 1
W0203 11:35:18.481916  116606 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0203 11:35:18.482023  116606 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1153595302/001/docker-machine-driver-kvm2
I0203 11:35:18.712902  116606 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1153595302/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0] Decompressors:map[bz2:0xc00080b550 gz:0xc00080b558 tar:0xc00080b500 tar.bz2:0xc00080b510 tar.gz:0xc00080b520 tar.xz:0xc00080b530 tar.zst:0xc00080b540 tbz2:0xc00080b510 tgz:0xc00080b520 txz:0xc00080b530 tzst:0xc00080b540 xz:0xc00080b560 zip:0xc00080b570 zst:0xc00080b568] Getters:map[file:0xc0028c38c0 http:0xc000075ea0 https:0xc000075ef0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0203 11:35:18.712947  116606 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1153595302/001/docker-machine-driver-kvm2
I0203 11:35:20.627943  116606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0203 11:35:20.628048  116606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0203 11:35:20.658134  116606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0203 11:35:20.658166  116606 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0203 11:35:20.658250  116606 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0203 11:35:20.658280  116606 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1153595302/002/docker-machine-driver-kvm2
I0203 11:35:20.684558  116606 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1153595302/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0 0x530d6a0] Decompressors:map[bz2:0xc00080b550 gz:0xc00080b558 tar:0xc00080b500 tar.bz2:0xc00080b510 tar.gz:0xc00080b520 tar.xz:0xc00080b530 tar.zst:0xc00080b540 tbz2:0xc00080b510 tgz:0xc00080b520 txz:0xc00080b530 tzst:0xc00080b540 xz:0xc00080b560 zip:0xc00080b570 zst:0xc00080b568] Getters:map[file:0xc0005e93d0 http:0xc000d4f860 https:0xc000d4f8b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0203 11:35:20.684598  116606 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1153595302/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.90s)
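The two download.go:108 sequences above show the fallback the driver installer performs: the arch-suffixed release asset fails its checksum download with a 404, so it retries the common asset name. A simplified Go sketch of that fallback is below; it is illustrative only, as the real code uses go-getter with checksum verification, which is omitted here.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchDriver tries the arch-specific release asset first and falls back to
// the unsuffixed name on a non-200 response, mirroring the
// "trying to get the common version" sequence in the log above.
func fetchDriver(version, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version + "/docker-machine-driver-kvm2"
	for _, url := range []string{base + "-amd64", base} {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			continue // e.g. 404 on the arch-specific asset: try the common name
		}
		defer resp.Body.Close()
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}
	return fmt.Errorf("no downloadable asset for %s", version)
}

func main() {
	fmt.Println(fetchDriver("v1.3.0", "/tmp/docker-machine-driver-kvm2"))
}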

                                                
                                    
TestErrorSpam/setup (42.61s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-537100 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-537100 --driver=kvm2  --container-runtime=crio
E0203 10:40:42.126253  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.132715  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.144121  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.165557  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.207032  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.288576  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.450170  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:42.771923  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:43.414019  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:44.695981  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:47.258513  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:40:52.381025  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:41:02.622841  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-537100 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-537100 --driver=kvm2  --container-runtime=crio: (42.607797715s)
--- PASS: TestErrorSpam/setup (42.61s)

                                                
                                    
TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.77s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
TestErrorSpam/unpause (1.73s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (6.01s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 stop: (2.281859205s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 stop: (1.992433995s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-537100 --log_dir /tmp/nospam-537100 stop: (1.732368228s)
--- PASS: TestErrorSpam/stop (6.01s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20354-109432/.minikube/files/etc/test/nested/copy/116606/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.41s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032338 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0203 10:41:23.104330  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:42:04.065674  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-032338 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.411874306s)
--- PASS: TestFunctional/serial/StartWithProxy (55.41s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.92s)
=== RUN   TestFunctional/serial/SoftStart
I0203 10:42:14.143874  116606 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032338 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-032338 --alsologtostderr -v=8: (54.919763836s)
functional_test.go:680: soft start took 54.920539178s for "functional-032338" cluster.
I0203 10:43:09.064047  116606 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (54.92s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-032338 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 cache add registry.k8s.io/pause:3.1: (1.599915591s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 cache add registry.k8s.io/pause:3.3: (1.654119222s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 cache add registry.k8s.io/pause:latest: (1.57764386s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-032338 /tmp/TestFunctionalserialCacheCmdcacheadd_local4244451207/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cache add minikube-local-cache-test:functional-032338
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 cache add minikube-local-cache-test:functional-032338: (2.231896096s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cache delete minikube-local-cache-test:functional-032338
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-032338
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.55s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.62627ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 cache reload: (1.450831735s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 kubectl -- --context functional-032338 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-032338 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.11s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032338 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0203 10:43:25.988032  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-032338 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.109589209s)
functional_test.go:778: restart took 33.109709266s for "functional-032338" cluster.
I0203 10:43:52.520856  116606 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (33.11s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-032338 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
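The ComponentHealth check fetches the control-plane pods as JSON and reads each pod's phase plus its Ready condition, which is what the phase/status lines above report. A small Go sketch of that read follows, assuming kubectl on PATH; the struct mirrors only the fields used and is not the functional_test.go implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the parts of `kubectl get po -o=json` that the check
// above looks at: pod name, phase, and the Ready condition.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-032338",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}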

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 logs: (1.338046503s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 logs --file /tmp/TestFunctionalserialLogsFileCmd739813434/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 logs --file /tmp/TestFunctionalserialLogsFileCmd739813434/001/logs.txt: (1.364060158s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.2s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-032338 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-032338
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-032338: exit status 115 (280.695892ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.158:30250 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-032338 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 config get cpus: exit status 14 (56.266836ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 config get cpus: exit status 14 (63.320699ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-032338 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-032338 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 124910: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.81s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032338 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-032338 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.527227ms)

                                                
                                                
-- stdout --
	* [functional-032338] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 10:44:14.607834  124724 out.go:345] Setting OutFile to fd 1 ...
	I0203 10:44:14.607932  124724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:44:14.607941  124724 out.go:358] Setting ErrFile to fd 2...
	I0203 10:44:14.607945  124724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:44:14.608123  124724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 10:44:14.608685  124724 out.go:352] Setting JSON to false
	I0203 10:44:14.609662  124724 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5197,"bootTime":1738574258,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 10:44:14.609732  124724 start.go:139] virtualization: kvm guest
	I0203 10:44:14.611923  124724 out.go:177] * [functional-032338] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 10:44:14.613333  124724 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 10:44:14.613326  124724 notify.go:220] Checking for updates...
	I0203 10:44:14.616378  124724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:44:14.617696  124724 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 10:44:14.618855  124724 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:44:14.620160  124724 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 10:44:14.621496  124724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 10:44:14.623310  124724 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 10:44:14.623937  124724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:44:14.624021  124724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:44:14.640810  124724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0203 10:44:14.641377  124724 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:44:14.642021  124724 main.go:141] libmachine: Using API Version  1
	I0203 10:44:14.642043  124724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:44:14.642417  124724 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:44:14.642666  124724 main.go:141] libmachine: (functional-032338) Calling .DriverName
	I0203 10:44:14.642947  124724 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:44:14.643270  124724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:44:14.643327  124724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:44:14.659825  124724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0203 10:44:14.660309  124724 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:44:14.660933  124724 main.go:141] libmachine: Using API Version  1
	I0203 10:44:14.660963  124724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:44:14.661306  124724 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:44:14.661520  124724 main.go:141] libmachine: (functional-032338) Calling .DriverName
	I0203 10:44:14.698180  124724 out.go:177] * Using the kvm2 driver based on existing profile
	I0203 10:44:14.699445  124724 start.go:297] selected driver: kvm2
	I0203 10:44:14.699461  124724 start.go:901] validating driver "kvm2" against &{Name:functional-032338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-032338 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:44:14.699577  124724 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 10:44:14.701887  124724 out.go:201] 
	W0203 10:44:14.703050  124724 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0203 10:44:14.704174  124724 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032338 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
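
The first dry run above fails by design: 250MB is below minikube's usable minimum of 1800MB, so the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run with the profile's existing memory setting succeeds. A sketch of the failing case, assuming the same binary, profile, and driver flags as this run:

// dryrun_sketch.go - illustrative only; reproduces the memory-validation failure above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-032338",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("dry run rejected, exit code:", ee.ExitCode()) // 23 in the log above
	}
}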

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032338 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-032338 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.181948ms)

                                                
                                                
-- stdout --
	* [functional-032338] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 10:44:14.931630  124810 out.go:345] Setting OutFile to fd 1 ...
	I0203 10:44:14.931807  124810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:44:14.931825  124810 out.go:358] Setting ErrFile to fd 2...
	I0203 10:44:14.931831  124810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:44:14.932263  124810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 10:44:14.933077  124810 out.go:352] Setting JSON to false
	I0203 10:44:14.934464  124810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5197,"bootTime":1738574258,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 10:44:14.934609  124810 start.go:139] virtualization: kvm guest
	I0203 10:44:14.936717  124810 out.go:177] * [functional-032338] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0203 10:44:14.938263  124810 notify.go:220] Checking for updates...
	I0203 10:44:14.938274  124810 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 10:44:14.939535  124810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:44:14.940703  124810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 10:44:14.941870  124810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 10:44:14.942912  124810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 10:44:14.944359  124810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 10:44:14.945933  124810 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 10:44:14.946516  124810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:44:14.946582  124810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:44:14.962155  124810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35047
	I0203 10:44:14.962674  124810 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:44:14.963316  124810 main.go:141] libmachine: Using API Version  1
	I0203 10:44:14.963360  124810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:44:14.963701  124810 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:44:14.963924  124810 main.go:141] libmachine: (functional-032338) Calling .DriverName
	I0203 10:44:14.964193  124810 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:44:14.964487  124810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:44:14.964544  124810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:44:14.979756  124810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I0203 10:44:14.980279  124810 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:44:14.980783  124810 main.go:141] libmachine: Using API Version  1
	I0203 10:44:14.980821  124810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:44:14.981179  124810 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:44:14.981336  124810 main.go:141] libmachine: (functional-032338) Calling .DriverName
	I0203 10:44:15.016065  124810 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0203 10:44:15.017150  124810 start.go:297] selected driver: kvm2
	I0203 10:44:15.017169  124810 start.go:901] validating driver "kvm2" against &{Name:functional-032338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-032338 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:44:15.017319  124810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 10:44:15.019312  124810 out.go:201] 
	W0203 10:44:15.020533  124810 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0203 10:44:15.021858  124810 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-032338 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-032338 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-82f9f" [0c8d6ffb-c72c-436f-8a3d-24b51ab51621] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-82f9f" [0c8d6ffb-c72c-436f-8a3d-24b51ab51621] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003320033s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.158:32491
functional_test.go:1692: http://192.168.39.158:32491: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-82f9f

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.158:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.158:32491
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
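
The ServiceCmdConnect flow above is: create an echoserver deployment, expose it as a NodePort on 8080, ask minikube for a reachable URL, and issue a plain HTTP GET. A condensed Go sketch of that flow, assuming the same context/profile; `kubectl wait` here stands in for the test's own readiness polling:

// servicecmd_sketch.go - illustrative only; follows the ServiceCmdConnect steps above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := "functional-032338" // assumption: context/profile from this log
	exec.Command("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8").Run()
	exec.Command("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080").Run()
	exec.Command("kubectl", "--context", ctx, "wait", "--for=condition=Available",
		"deployment/hello-node-connect", "--timeout=10m").Run()

	// minikube resolves the NodePort to http://<node-ip>:<port>, e.g. http://192.168.39.158:32491 above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", ctx, "service",
		"hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reports hostname and request details, as logged above
}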

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5e8fd048-cfaa-4ff4-81b9-2291f0d2e983] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003859855s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-032338 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-032338 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-032338 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-032338 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032338 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [581a0126-2e3d-4ef3-9e04-d15b24def0b8] Pending
helpers_test.go:344: "sp-pod" [581a0126-2e3d-4ef3-9e04-d15b24def0b8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [581a0126-2e3d-4ef3-9e04-d15b24def0b8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003613556s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-032338 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-032338 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-032338 delete -f testdata/storage-provisioner/pod.yaml: (5.305390804s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032338 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e2cbbac-efb7-4650-a50b-e5f52a8bdbd0] Pending
helpers_test.go:344: "sp-pod" [3e2cbbac-efb7-4650-a50b-e5f52a8bdbd0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3e2cbbac-efb7-4650-a50b-e5f52a8bdbd0] Running
2025/02/03 10:44:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003063115s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-032338 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.00s)
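
The PersistentVolumeClaim test above proves persistence by writing /tmp/mount/foo from one pod, deleting that pod, recreating it from the same manifest, and listing the mount again. A compact sketch of the same sequence, assuming the test's manifests and its pod name (sp-pod); `kubectl wait` replaces the harness's readiness polling:

// pvc_sketch.go - illustrative only; condenses the PersistentVolumeClaim steps above.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) []byte {
	full := append([]string{"--context", "functional-032338"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the file must survive because it lives on the PVC, not in the pod.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	fmt.Printf("%s", kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}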

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh -n functional-032338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cp functional-032338:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4237071643/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh -n functional-032338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh -n functional-032338 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)
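
The CpCmd steps above copy a local file into the node and read it back over SSH (including a variant that copies into a directory that does not yet exist). A minimal sketch of the first round trip, under the same binary/profile assumptions:

// cpcmd_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-032338" // from this log
	if err := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // should match the local testdata/cp-test.txt
}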

                                                
                                    
TestFunctional/parallel/MySQL (24.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-032338 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-zpkxz" [c989bdcb-99f6-42ba-bfd2-bfa02a23756a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-zpkxz" [c989bdcb-99f6-42ba-bfd2-bfa02a23756a] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.0095652s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-032338 exec mysql-58ccfd96bb-zpkxz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.40s)
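
The MySQL test above deploys testdata/mysql.yaml, waits for the app=mysql pod, and runs a query through kubectl exec. A sketch of the same check; the pod name is resolved dynamically rather than hard-coding the generated suffix seen in the log, and the -ppassword value is the one this run's manifest uses:

// mysql_sketch.go - illustrative only; follows the MySQL steps above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "functional-032338" // assumption: context from this log
	exec.Command("kubectl", "--context", ctx, "replace", "--force", "-f", "testdata/mysql.yaml").Run()
	exec.Command("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "app=mysql", "--timeout=10m").Run()
	name, _ := exec.Command("kubectl", "--context", ctx, "get", "pod", "-l", "app=mysql",
		"-o", "jsonpath={.items[0].metadata.name}").Output()
	out, _ := exec.Command("kubectl", "--context", ctx, "exec", strings.TrimSpace(string(name)),
		"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
	fmt.Printf("%s", out)
}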

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/116606/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /etc/test/nested/copy/116606/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/116606.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /etc/ssl/certs/116606.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/116606.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /usr/share/ca-certificates/116606.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/1166062.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /etc/ssl/certs/1166062.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/1166062.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /usr/share/ca-certificates/1166062.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)
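
CertSync checks that the host-side test certificates (named after this run's PID, 116606) are synced into the guest both under their plain names and under their OpenSSL subject-hash names (51391683.0, 3ec20f2e.0). A sketch that probes the same paths over minikube ssh, assuming the binary, profile, and filenames from this run:

// certsync_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/116606.pem",
		"/usr/share/ca-certificates/116606.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/1166062.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-032338",
			"ssh", "sudo test -f "+p).Run()
		fmt.Printf("%-45s present: %v\n", p, err == nil)
	}
}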

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-032338 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh "sudo systemctl is-active docker": exit status 1 (228.274127ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh "sudo systemctl is-active containerd": exit status 1 (230.126982ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
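
The non-zero exits above are expected: with cri-o selected as the runtime, `systemctl is-active docker` (and containerd) prints "inactive" and exits with status 3, which minikube ssh surfaces as exit status 1. A sketch of the same probe under the binary/profile assumptions used throughout:

// runtime_check_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-032338",
			"ssh", "sudo systemctl is-active "+svc).CombinedOutput()
		// "inactive" plus a non-nil err is the expected result when cri-o is the runtime.
		fmt.Printf("%s: %s (err: %v)\n", svc, strings.TrimSpace(string(out)), err)
	}
}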

                                                
                                    
TestFunctional/parallel/License (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2305: (dbg) Done: out/minikube-linux-amd64 license: (1.553645991s)
--- PASS: TestFunctional/parallel/License (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032338 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-032338
localhost/kicbase/echo-server:functional-032338
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032338 image ls --format short --alsologtostderr:
I0203 10:44:35.737562  125170 out.go:345] Setting OutFile to fd 1 ...
I0203 10:44:35.737736  125170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:35.737751  125170 out.go:358] Setting ErrFile to fd 2...
I0203 10:44:35.737759  125170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:35.738186  125170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
I0203 10:44:35.739334  125170 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:35.739510  125170 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:35.739928  125170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:35.740074  125170 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:35.755874  125170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
I0203 10:44:35.756487  125170 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:35.757096  125170 main.go:141] libmachine: Using API Version  1
I0203 10:44:35.757124  125170 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:35.757502  125170 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:35.757707  125170 main.go:141] libmachine: (functional-032338) Calling .GetState
I0203 10:44:35.759694  125170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:35.759733  125170 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:35.774464  125170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
I0203 10:44:35.774858  125170 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:35.775373  125170 main.go:141] libmachine: Using API Version  1
I0203 10:44:35.775396  125170 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:35.775690  125170 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:35.775894  125170 main.go:141] libmachine: (functional-032338) Calling .DriverName
I0203 10:44:35.776088  125170 ssh_runner.go:195] Run: systemctl --version
I0203 10:44:35.776113  125170 main.go:141] libmachine: (functional-032338) Calling .GetSSHHostname
I0203 10:44:35.779103  125170 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:35.779481  125170 main.go:141] libmachine: (functional-032338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:e6", ip: ""} in network mk-functional-032338: {Iface:virbr1 ExpiryTime:2025-02-03 11:41:33 +0000 UTC Type:0 Mac:52:54:00:53:ef:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-032338 Clientid:01:52:54:00:53:ef:e6}
I0203 10:44:35.779504  125170 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined IP address 192.168.39.158 and MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:35.779719  125170 main.go:141] libmachine: (functional-032338) Calling .GetSSHPort
I0203 10:44:35.779925  125170 main.go:141] libmachine: (functional-032338) Calling .GetSSHKeyPath
I0203 10:44:35.780092  125170 main.go:141] libmachine: (functional-032338) Calling .GetSSHUsername
I0203 10:44:35.780275  125170 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/functional-032338/id_rsa Username:docker}
I0203 10:44:35.875048  125170 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 10:44:35.924795  125170 main.go:141] libmachine: Making call to close driver server
I0203 10:44:35.924810  125170 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:35.925131  125170 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:35.925148  125170 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:35.925162  125170 main.go:141] libmachine: Making call to close driver server
I0203 10:44:35.925169  125170 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:35.925396  125170 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:35.925424  125170 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
I0203 10:44:35.925452  125170 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
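
The image-list tests here and below only vary the output format; the --alsologtostderr trace above shows that each invocation SSHes into the node and runs `sudo crictl images --output json` under the hood. A sketch that exercises all three formats with the same binary and profile:

// imagels_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, format := range []string{"short", "table", "json"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-032338",
			"image", "ls", "--format", format).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("=== format %s ===\n%s\n", format, out)
	}
}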

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032338 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-032338  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-032338  | 2bb99143b2dc1 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032338 image ls --format table --alsologtostderr:
I0203 10:44:39.304306  125604 out.go:345] Setting OutFile to fd 1 ...
I0203 10:44:39.304418  125604 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:39.304429  125604 out.go:358] Setting ErrFile to fd 2...
I0203 10:44:39.304433  125604 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:39.304649  125604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
I0203 10:44:39.305260  125604 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:39.305354  125604 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:39.305710  125604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:39.305782  125604 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:39.320916  125604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
I0203 10:44:39.321372  125604 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:39.321920  125604 main.go:141] libmachine: Using API Version  1
I0203 10:44:39.321946  125604 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:39.322269  125604 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:39.322466  125604 main.go:141] libmachine: (functional-032338) Calling .GetState
I0203 10:44:39.324066  125604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:39.324110  125604 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:39.338905  125604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33805
I0203 10:44:39.339444  125604 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:39.339929  125604 main.go:141] libmachine: Using API Version  1
I0203 10:44:39.339951  125604 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:39.340312  125604 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:39.340534  125604 main.go:141] libmachine: (functional-032338) Calling .DriverName
I0203 10:44:39.340720  125604 ssh_runner.go:195] Run: systemctl --version
I0203 10:44:39.340743  125604 main.go:141] libmachine: (functional-032338) Calling .GetSSHHostname
I0203 10:44:39.343374  125604 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:39.343908  125604 main.go:141] libmachine: (functional-032338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:e6", ip: ""} in network mk-functional-032338: {Iface:virbr1 ExpiryTime:2025-02-03 11:41:33 +0000 UTC Type:0 Mac:52:54:00:53:ef:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-032338 Clientid:01:52:54:00:53:ef:e6}
I0203 10:44:39.343944  125604 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined IP address 192.168.39.158 and MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:39.344110  125604 main.go:141] libmachine: (functional-032338) Calling .GetSSHPort
I0203 10:44:39.344308  125604 main.go:141] libmachine: (functional-032338) Calling .GetSSHKeyPath
I0203 10:44:39.344485  125604 main.go:141] libmachine: (functional-032338) Calling .GetSSHUsername
I0203 10:44:39.344677  125604 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/functional-032338/id_rsa Username:docker}
I0203 10:44:39.444595  125604 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 10:44:39.495354  125604 main.go:141] libmachine: Making call to close driver server
I0203 10:44:39.495376  125604 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:39.495625  125604 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:39.495643  125604 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:39.495658  125604 main.go:141] libmachine: Making call to close driver server
I0203 10:44:39.495665  125604 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:39.495669  125604 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
I0203 10:44:39.495872  125604 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:39.495910  125604 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:39.495913  125604 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032338 image ls --format json --alsologtostderr:
[{"id":"2bb99143b2dc1c62b246bf760513721f070a9768b9546d5333ca53a29e1dced2","repoDigests":["localhost/minikube-local-cache-test@sha256:b1c29174ca90c1a0c8f2d7f357761a184703b1bf1e1fee0ae5ff4c9280c6c1e9"],"repoTags":["localhost/minikube-local-cache-test:functional-032338"],"size":"3328"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"019ee182b58e2
0da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015
513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha25
6:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-032338"],"size":"4943877"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":[],"size":"1462480"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995
460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e0
5511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc"
,"repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032338 image ls --format json --alsologtostderr:
I0203 10:44:38.968432  125580 out.go:345] Setting OutFile to fd 1 ...
I0203 10:44:38.968543  125580 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:38.968555  125580 out.go:358] Setting ErrFile to fd 2...
I0203 10:44:38.968562  125580 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:38.968770  125580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
I0203 10:44:38.969392  125580 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:38.969501  125580 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:38.969839  125580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:38.969915  125580 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:38.985618  125580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
I0203 10:44:38.986177  125580 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:38.986855  125580 main.go:141] libmachine: Using API Version  1
I0203 10:44:38.986884  125580 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:38.987303  125580 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:38.987496  125580 main.go:141] libmachine: (functional-032338) Calling .GetState
I0203 10:44:38.989656  125580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:38.989708  125580 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:39.004542  125580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
I0203 10:44:39.005032  125580 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:39.005651  125580 main.go:141] libmachine: Using API Version  1
I0203 10:44:39.005686  125580 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:39.006039  125580 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:39.006247  125580 main.go:141] libmachine: (functional-032338) Calling .DriverName
I0203 10:44:39.006435  125580 ssh_runner.go:195] Run: systemctl --version
I0203 10:44:39.006462  125580 main.go:141] libmachine: (functional-032338) Calling .GetSSHHostname
I0203 10:44:39.009425  125580 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:39.009946  125580 main.go:141] libmachine: (functional-032338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:e6", ip: ""} in network mk-functional-032338: {Iface:virbr1 ExpiryTime:2025-02-03 11:41:33 +0000 UTC Type:0 Mac:52:54:00:53:ef:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-032338 Clientid:01:52:54:00:53:ef:e6}
I0203 10:44:39.009982  125580 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined IP address 192.168.39.158 and MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:39.010076  125580 main.go:141] libmachine: (functional-032338) Calling .GetSSHPort
I0203 10:44:39.010261  125580 main.go:141] libmachine: (functional-032338) Calling .GetSSHKeyPath
I0203 10:44:39.010410  125580 main.go:141] libmachine: (functional-032338) Calling .GetSSHUsername
I0203 10:44:39.010541  125580 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/functional-032338/id_rsa Username:docker}
I0203 10:44:39.107096  125580 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 10:44:39.251634  125580 main.go:141] libmachine: Making call to close driver server
I0203 10:44:39.251653  125580 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:39.251959  125580 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
I0203 10:44:39.252002  125580 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:39.252019  125580 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:39.252033  125580 main.go:141] libmachine: Making call to close driver server
I0203 10:44:39.252044  125580 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:39.252269  125580 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:39.252294  125580 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:39.252321  125580 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
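For anyone post-processing the `image ls --format json` stdout captured above, a minimal sketch of decoding it with Go's standard library. The field names (id, repoDigests, repoTags, size) are taken directly from that stdout; the sample input below is abbreviated to a single entry from this run and is only for illustration.

// Minimal sketch: decode the `minikube image ls --format json` output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Abbreviated example taken from the test output above.
	raw := `[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06",
	  "repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],
	  "repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]`

	var images []imageEntry
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%s  tags=%v  size=%s bytes\n", img.ID[:12], img.RepoTags, img.Size)
	}
}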

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032338 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-032338
size: "4943877"
- id: 2bb99143b2dc1c62b246bf760513721f070a9768b9546d5333ca53a29e1dced2
repoDigests:
- localhost/minikube-local-cache-test@sha256:b1c29174ca90c1a0c8f2d7f357761a184703b1bf1e1fee0ae5ff4c9280c6c1e9
repoTags:
- localhost/minikube-local-cache-test:functional-032338
size: "3328"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032338 image ls --format yaml --alsologtostderr:
I0203 10:44:35.975742  125194 out.go:345] Setting OutFile to fd 1 ...
I0203 10:44:35.975839  125194 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:35.975850  125194 out.go:358] Setting ErrFile to fd 2...
I0203 10:44:35.975854  125194 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:44:35.976065  125194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
I0203 10:44:35.976681  125194 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:35.976780  125194 config.go:182] Loaded profile config "functional-032338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0203 10:44:35.977175  125194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:35.977230  125194 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:35.993620  125194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
I0203 10:44:35.994100  125194 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:35.994658  125194 main.go:141] libmachine: Using API Version  1
I0203 10:44:35.994688  125194 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:35.995034  125194 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:35.995256  125194 main.go:141] libmachine: (functional-032338) Calling .GetState
I0203 10:44:35.997103  125194 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0203 10:44:35.997141  125194 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 10:44:36.011681  125194 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
I0203 10:44:36.012076  125194 main.go:141] libmachine: () Calling .GetVersion
I0203 10:44:36.012512  125194 main.go:141] libmachine: Using API Version  1
I0203 10:44:36.012534  125194 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 10:44:36.012826  125194 main.go:141] libmachine: () Calling .GetMachineName
I0203 10:44:36.013039  125194 main.go:141] libmachine: (functional-032338) Calling .DriverName
I0203 10:44:36.013243  125194 ssh_runner.go:195] Run: systemctl --version
I0203 10:44:36.013270  125194 main.go:141] libmachine: (functional-032338) Calling .GetSSHHostname
I0203 10:44:36.015759  125194 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:36.016201  125194 main.go:141] libmachine: (functional-032338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:e6", ip: ""} in network mk-functional-032338: {Iface:virbr1 ExpiryTime:2025-02-03 11:41:33 +0000 UTC Type:0 Mac:52:54:00:53:ef:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-032338 Clientid:01:52:54:00:53:ef:e6}
I0203 10:44:36.016229  125194 main.go:141] libmachine: (functional-032338) DBG | domain functional-032338 has defined IP address 192.168.39.158 and MAC address 52:54:00:53:ef:e6 in network mk-functional-032338
I0203 10:44:36.016320  125194 main.go:141] libmachine: (functional-032338) Calling .GetSSHPort
I0203 10:44:36.016502  125194 main.go:141] libmachine: (functional-032338) Calling .GetSSHKeyPath
I0203 10:44:36.016634  125194 main.go:141] libmachine: (functional-032338) Calling .GetSSHUsername
I0203 10:44:36.016766  125194 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/functional-032338/id_rsa Username:docker}
I0203 10:44:36.100356  125194 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 10:44:36.143445  125194 main.go:141] libmachine: Making call to close driver server
I0203 10:44:36.143459  125194 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:36.143729  125194 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:36.143762  125194 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
I0203 10:44:36.143795  125194 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:36.143817  125194 main.go:141] libmachine: Making call to close driver server
I0203 10:44:36.143829  125194 main.go:141] libmachine: (functional-032338) Calling .Close
I0203 10:44:36.144022  125194 main.go:141] libmachine: Successfully made call to close driver server
I0203 10:44:36.144040  125194 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 10:44:36.144057  125194 main.go:141] libmachine: (functional-032338) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.715418897s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-032338
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-032338 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-032338 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-57g6n" [9b809292-b5e0-4207-9afc-16d934ae08fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-57g6n" [9b809292-b5e0-4207-9afc-16d934ae08fa] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004677658s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)
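The DeployApp subtest above drives two kubectl commands: create a deployment and expose it as a NodePort service. A minimal sketch of the same sequence driven from Go via os/exec follows; the context name and image are reused from the logged commands purely for illustration, not as the test's own code.

// Minimal sketch of the deploy-then-expose sequence this subtest runs,
// using the same kubectl arguments captured above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	ctx := "--context=functional-032338" // context name taken from this run
	run(ctx, "create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	run(ctx, "expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// The test then waits for pods labelled app=hello-node to report Ready.
}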

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image load --daemon kicbase/echo-server:functional-032338 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-032338 image load --daemon kicbase/echo-server:functional-032338 --alsologtostderr: (3.352821974s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image load --daemon kicbase/echo-server:functional-032338 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-032338
I0203 10:44:06.546391  116606 retry.go:31] will retry after 2.89361293s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f5a40708-d75a-4218-b34f-4d4f38852a08 ResourceVersion:785 Generation:0 CreationTimestamp:2025-02-03 10:44:06 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-f5a40708-d75a-4218-b34f-4d4f38852a08 StorageClassName:0xc0008e2250 VolumeMode:0xc0008e2260 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image load --daemon kicbase/echo-server:functional-032338 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image save kicbase/echo-server:functional-032338 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image rm kicbase/echo-server:functional-032338 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-032338
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 image save --daemon kicbase/echo-server:functional-032338 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-032338
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
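The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above together exercise the image save/load round trip. A minimal sketch chaining the same minikube subcommands via os/exec follows; the tarball path is hypothetical (the test uses its Jenkins workspace), and the profile name is reused from this run for illustration only.

// Minimal sketch of the save / remove / reload round trip exercised above,
// using the same minikube subcommands that appear in the captured commands.
package main

import (
	"log"
	"os/exec"
)

func minikube(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	profile := "functional-032338"
	tar := "/tmp/echo-server-save.tar" // hypothetical path; the test saves into its workspace dir

	minikube("-p", profile, "image", "save", "kicbase/echo-server:"+profile, tar)
	minikube("-p", profile, "image", "rm", "kicbase/echo-server:"+profile)
	minikube("-p", profile, "image", "load", tar)
	minikube("-p", profile, "image", "ls") // verify the tag is listed again
}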

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.87s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 service list -o json
functional_test.go:1511: Took "449.635395ms" to run "out/minikube-linux-amd64 -p functional-032338 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.158:32531
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.158:32531
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
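Once `minikube service hello-node --url` reports the NodePort endpoint (http://192.168.39.158:32531 in this run), a quick sanity check is a plain HTTP GET against it. A minimal sketch with Go's net/http follows; the address is hard-coded from this particular run only as an example.

// Minimal sketch: probe the NodePort endpoint reported above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.39.158:32531") // endpoint from this run, illustrative only
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // echoserver responds with details of the request
}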

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (22.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdany-port4234618319/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1738579453301460515" to /tmp/TestFunctionalparallelMountCmdany-port4234618319/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1738579453301460515" to /tmp/TestFunctionalparallelMountCmdany-port4234618319/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1738579453301460515" to /tmp/TestFunctionalparallelMountCmdany-port4234618319/001/test-1738579453301460515
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.93407ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0203 10:44:13.584755  116606 retry.go:31] will retry after 632.183757ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  3 10:44 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  3 10:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  3 10:44 test-1738579453301460515
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh cat /mount-9p/test-1738579453301460515
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-032338 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [92d6a8ea-c640-40b8-83f6-622dfce61d69] Pending
helpers_test.go:344: "busybox-mount" [92d6a8ea-c640-40b8-83f6-622dfce61d69] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [92d6a8ea-c640-40b8-83f6-622dfce61d69] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [92d6a8ea-c640-40b8-83f6-622dfce61d69] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.004157891s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-032338 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdany-port4234618319/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.07s)
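The any-port mount test writes files into a host directory and then checks them inside the guest over the 9p mount using `minikube ssh`, as the commands above show. A minimal sketch of that host-write / guest-read check follows; the host directory is hypothetical (the test uses a per-test tmp dir), and it assumes a `minikube mount <hostDir>:/mount-9p` process is already running, as the test's background daemon provides.

// Minimal sketch of the host-write / guest-read check performed above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	hostDir := "/tmp/mount-demo" // hypothetical stand-in for the test's tmp directory
	profile := "functional-032338"

	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(hostDir, "created-by-test"), []byte("hello 9p\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Assumes `minikube mount <hostDir>:/mount-9p` is already running in the background.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "cat /mount-9p/created-by-test").CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	fmt.Print(string(out))
}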

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "413.001645ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "51.788832ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "400.573279ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "64.055599ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdspecific-port51610076/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.949541ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0203 10:44:35.620010  116606 retry.go:31] will retry after 677.178772ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdspecific-port51610076/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh "sudo umount -f /mount-9p": exit status 1 (198.475034ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-032338 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdspecific-port51610076/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdVerifyCleanup845690645/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdVerifyCleanup845690645/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdVerifyCleanup845690645/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T" /mount1: exit status 1 (252.541264ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0203 10:44:37.570909  116606 retry.go:31] will retry after 478.341418ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032338 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-032338 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdVerifyCleanup845690645/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdVerifyCleanup845690645/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032338 /tmp/TestFunctionalparallelMountCmdVerifyCleanup845690645/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-032338
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-032338
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-032338
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (198.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-063873 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0203 10:45:42.117447  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:46:09.829605  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-063873 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.701987845s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.39s)
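The HA cluster above is brought up with the `--ha` flag and then checked with `status`. A minimal sketch that invokes the same two commands via os/exec follows, with the flags copied from the logged command line and the profile name reused from this run for illustration only.

// Minimal sketch of the start-then-status sequence above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "ha-063873"

	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	status := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"status", "-v=7", "--alsologtostderr")
	out, _ := status.CombinedOutput() // status can exit non-zero when a node is not fully up
	fmt.Print(string(out))
}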

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-063873 -- rollout status deployment/busybox: (5.966870822s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-4hqdq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-sdw76 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-vtkg6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-4hqdq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-sdw76 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-vtkg6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-4hqdq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-sdw76 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-vtkg6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.05s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-4hqdq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-4hqdq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-sdw76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-sdw76 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-vtkg6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-063873 -- exec busybox-58667487b6-vtkg6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
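Each busybox pod above resolves host.minikube.internal and then pings the resolved host address; the `nslookup | awk 'NR==5' | cut -d' ' -f3` pipeline picks the address out of nslookup's output. A minimal sketch running the same in-pod check through kubectl exec follows; it assumes a kubeconfig context named after the profile exists, and the pod name and gateway address are taken from this run purely for illustration.

// Minimal sketch of the in-pod host-reachability check above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func podExec(pod, script string) string {
	out, err := exec.Command("kubectl", "--context", "ha-063873",
		"exec", pod, "--", "sh", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	return string(out)
}

func main() {
	pod := "busybox-58667487b6-4hqdq" // pod name from this run

	// The awk/cut pipeline used by the test picks the answer line of busybox
	// nslookup output and extracts the address field from it.
	ip := podExec(pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	fmt.Printf("host.minikube.internal resolves to %s", ip)

	fmt.Print(podExec(pod, "ping -c 1 192.168.39.1")) // gateway address from this run
}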

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-063873 -v=7 --alsologtostderr
E0203 10:49:00.130481  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.136990  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.148467  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.169958  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.211499  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.293006  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.454947  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:00.776748  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:01.418602  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:02.699987  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:05.262338  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:49:10.384471  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-063873 -v=7 --alsologtostderr: (53.98365995s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.86s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-063873 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp testdata/cp-test.txt ha-063873:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3534844800/001/cp-test_ha-063873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873:/home/docker/cp-test.txt ha-063873-m02:/home/docker/cp-test_ha-063873_ha-063873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test_ha-063873_ha-063873-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873:/home/docker/cp-test.txt ha-063873-m03:/home/docker/cp-test_ha-063873_ha-063873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test_ha-063873_ha-063873-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873:/home/docker/cp-test.txt ha-063873-m04:/home/docker/cp-test_ha-063873_ha-063873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test_ha-063873_ha-063873-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp testdata/cp-test.txt ha-063873-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3534844800/001/cp-test_ha-063873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test.txt"
E0203 10:49:20.626733  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m02:/home/docker/cp-test.txt ha-063873:/home/docker/cp-test_ha-063873-m02_ha-063873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test_ha-063873-m02_ha-063873.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m02:/home/docker/cp-test.txt ha-063873-m03:/home/docker/cp-test_ha-063873-m02_ha-063873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test_ha-063873-m02_ha-063873-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m02:/home/docker/cp-test.txt ha-063873-m04:/home/docker/cp-test_ha-063873-m02_ha-063873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test_ha-063873-m02_ha-063873-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp testdata/cp-test.txt ha-063873-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3534844800/001/cp-test_ha-063873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m03:/home/docker/cp-test.txt ha-063873:/home/docker/cp-test_ha-063873-m03_ha-063873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test_ha-063873-m03_ha-063873.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m03:/home/docker/cp-test.txt ha-063873-m02:/home/docker/cp-test_ha-063873-m03_ha-063873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test_ha-063873-m03_ha-063873-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m03:/home/docker/cp-test.txt ha-063873-m04:/home/docker/cp-test_ha-063873-m03_ha-063873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test_ha-063873-m03_ha-063873-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp testdata/cp-test.txt ha-063873-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3534844800/001/cp-test_ha-063873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m04:/home/docker/cp-test.txt ha-063873:/home/docker/cp-test_ha-063873-m04_ha-063873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873 "sudo cat /home/docker/cp-test_ha-063873-m04_ha-063873.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m04:/home/docker/cp-test.txt ha-063873-m02:/home/docker/cp-test_ha-063873-m04_ha-063873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m02 "sudo cat /home/docker/cp-test_ha-063873-m04_ha-063873-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 cp ha-063873-m04:/home/docker/cp-test.txt ha-063873-m03:/home/docker/cp-test_ha-063873-m04_ha-063873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 ssh -n ha-063873-m03 "sudo cat /home/docker/cp-test_ha-063873-m04_ha-063873-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.08s)
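
The CopyFile block above repeats one copy-then-verify pattern for every node pair: out/minikube-linux-amd64 cp pushes testdata/cp-test.txt onto a node, then minikube ssh -n <node> "sudo cat ..." reads it back so the contents can be compared. Below is a minimal Go sketch of that pattern, reusing the profile and node names visible in the log; it is an illustrative stand-in, not the actual helpers_test.go code.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// run invokes the minikube binary used throughout this report and
	// returns its combined output.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "ha-063873" // profile name taken from the log above
		nodes := []string{"ha-063873", "ha-063873-m02", "ha-063873-m03", "ha-063873-m04"}

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}

		for _, node := range nodes {
			// Push the test file to the node ...
			if out, err := run("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt"); err != nil {
				log.Fatalf("cp to %s failed: %v\n%s", node, err, out)
			}
			// ... then read it back over SSH and compare the contents.
			got, err := run("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
			if err != nil {
				log.Fatalf("ssh cat on %s failed: %v\n%s", node, err, got)
			}
			if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
				log.Fatalf("content mismatch on %s", node)
			}
			fmt.Println("verified", node)
		}
	}
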

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (91.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 node stop m02 -v=7 --alsologtostderr
E0203 10:49:41.108607  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:50:22.069974  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:50:42.117527  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-063873 node stop m02 -v=7 --alsologtostderr: (1m31.007685675s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr: exit status 7 (663.989336ms)

                                                
                                                
-- stdout --
	ha-063873
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-063873-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-063873-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-063873-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 10:51:00.125620  130347 out.go:345] Setting OutFile to fd 1 ...
	I0203 10:51:00.125737  130347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:51:00.125750  130347 out.go:358] Setting ErrFile to fd 2...
	I0203 10:51:00.125754  130347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:51:00.125941  130347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 10:51:00.126161  130347 out.go:352] Setting JSON to false
	I0203 10:51:00.126197  130347 mustload.go:65] Loading cluster: ha-063873
	I0203 10:51:00.126228  130347 notify.go:220] Checking for updates...
	I0203 10:51:00.126586  130347 config.go:182] Loaded profile config "ha-063873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 10:51:00.126605  130347 status.go:174] checking status of ha-063873 ...
	I0203 10:51:00.126987  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.127028  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.143095  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0203 10:51:00.143605  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.144394  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.144430  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.144753  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.144943  130347 main.go:141] libmachine: (ha-063873) Calling .GetState
	I0203 10:51:00.146624  130347 status.go:371] ha-063873 host status = "Running" (err=<nil>)
	I0203 10:51:00.146648  130347 host.go:66] Checking if "ha-063873" exists ...
	I0203 10:51:00.146936  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.146980  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.161654  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0203 10:51:00.162205  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.162752  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.162776  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.163119  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.163307  130347 main.go:141] libmachine: (ha-063873) Calling .GetIP
	I0203 10:51:00.166737  130347 main.go:141] libmachine: (ha-063873) DBG | domain ha-063873 has defined MAC address 52:54:00:1b:16:9f in network mk-ha-063873
	I0203 10:51:00.167720  130347 main.go:141] libmachine: (ha-063873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:16:9f", ip: ""} in network mk-ha-063873: {Iface:virbr1 ExpiryTime:2025-02-03 11:45:07 +0000 UTC Type:0 Mac:52:54:00:1b:16:9f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-063873 Clientid:01:52:54:00:1b:16:9f}
	I0203 10:51:00.167751  130347 main.go:141] libmachine: (ha-063873) DBG | domain ha-063873 has defined IP address 192.168.39.80 and MAC address 52:54:00:1b:16:9f in network mk-ha-063873
	I0203 10:51:00.167991  130347 host.go:66] Checking if "ha-063873" exists ...
	I0203 10:51:00.168349  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.168397  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.184251  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I0203 10:51:00.184653  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.185110  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.185134  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.185427  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.185614  130347 main.go:141] libmachine: (ha-063873) Calling .DriverName
	I0203 10:51:00.185777  130347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 10:51:00.185798  130347 main.go:141] libmachine: (ha-063873) Calling .GetSSHHostname
	I0203 10:51:00.188917  130347 main.go:141] libmachine: (ha-063873) DBG | domain ha-063873 has defined MAC address 52:54:00:1b:16:9f in network mk-ha-063873
	I0203 10:51:00.189344  130347 main.go:141] libmachine: (ha-063873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:16:9f", ip: ""} in network mk-ha-063873: {Iface:virbr1 ExpiryTime:2025-02-03 11:45:07 +0000 UTC Type:0 Mac:52:54:00:1b:16:9f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-063873 Clientid:01:52:54:00:1b:16:9f}
	I0203 10:51:00.189430  130347 main.go:141] libmachine: (ha-063873) DBG | domain ha-063873 has defined IP address 192.168.39.80 and MAC address 52:54:00:1b:16:9f in network mk-ha-063873
	I0203 10:51:00.189530  130347 main.go:141] libmachine: (ha-063873) Calling .GetSSHPort
	I0203 10:51:00.189791  130347 main.go:141] libmachine: (ha-063873) Calling .GetSSHKeyPath
	I0203 10:51:00.189955  130347 main.go:141] libmachine: (ha-063873) Calling .GetSSHUsername
	I0203 10:51:00.190131  130347 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/ha-063873/id_rsa Username:docker}
	I0203 10:51:00.278268  130347 ssh_runner.go:195] Run: systemctl --version
	I0203 10:51:00.284610  130347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 10:51:00.300495  130347 kubeconfig.go:125] found "ha-063873" server: "https://192.168.39.254:8443"
	I0203 10:51:00.300537  130347 api_server.go:166] Checking apiserver status ...
	I0203 10:51:00.300582  130347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 10:51:00.322331  130347 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1111/cgroup
	W0203 10:51:00.333082  130347 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1111/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 10:51:00.333142  130347 ssh_runner.go:195] Run: ls
	I0203 10:51:00.339041  130347 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0203 10:51:00.346866  130347 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0203 10:51:00.346909  130347 status.go:463] ha-063873 apiserver status = Running (err=<nil>)
	I0203 10:51:00.346921  130347 status.go:176] ha-063873 status: &{Name:ha-063873 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 10:51:00.346948  130347 status.go:174] checking status of ha-063873-m02 ...
	I0203 10:51:00.347408  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.347466  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.362411  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0203 10:51:00.362866  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.363509  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.363540  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.363981  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.364180  130347 main.go:141] libmachine: (ha-063873-m02) Calling .GetState
	I0203 10:51:00.365691  130347 status.go:371] ha-063873-m02 host status = "Stopped" (err=<nil>)
	I0203 10:51:00.365706  130347 status.go:384] host is not running, skipping remaining checks
	I0203 10:51:00.365712  130347 status.go:176] ha-063873-m02 status: &{Name:ha-063873-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 10:51:00.365730  130347 status.go:174] checking status of ha-063873-m03 ...
	I0203 10:51:00.366040  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.366098  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.382182  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0203 10:51:00.383596  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.384206  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.384233  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.384607  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.384830  130347 main.go:141] libmachine: (ha-063873-m03) Calling .GetState
	I0203 10:51:00.386826  130347 status.go:371] ha-063873-m03 host status = "Running" (err=<nil>)
	I0203 10:51:00.386845  130347 host.go:66] Checking if "ha-063873-m03" exists ...
	I0203 10:51:00.387107  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.387143  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.402694  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0203 10:51:00.403200  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.403835  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.403865  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.404230  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.404504  130347 main.go:141] libmachine: (ha-063873-m03) Calling .GetIP
	I0203 10:51:00.407973  130347 main.go:141] libmachine: (ha-063873-m03) DBG | domain ha-063873-m03 has defined MAC address 52:54:00:e5:6c:9e in network mk-ha-063873
	I0203 10:51:00.408350  130347 main.go:141] libmachine: (ha-063873-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:6c:9e", ip: ""} in network mk-ha-063873: {Iface:virbr1 ExpiryTime:2025-02-03 11:47:09 +0000 UTC Type:0 Mac:52:54:00:e5:6c:9e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-063873-m03 Clientid:01:52:54:00:e5:6c:9e}
	I0203 10:51:00.408382  130347 main.go:141] libmachine: (ha-063873-m03) DBG | domain ha-063873-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:6c:9e in network mk-ha-063873
	I0203 10:51:00.408590  130347 host.go:66] Checking if "ha-063873-m03" exists ...
	I0203 10:51:00.408878  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.408916  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.424428  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0203 10:51:00.424834  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.425320  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.425339  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.425698  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.425882  130347 main.go:141] libmachine: (ha-063873-m03) Calling .DriverName
	I0203 10:51:00.426188  130347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 10:51:00.426213  130347 main.go:141] libmachine: (ha-063873-m03) Calling .GetSSHHostname
	I0203 10:51:00.429314  130347 main.go:141] libmachine: (ha-063873-m03) DBG | domain ha-063873-m03 has defined MAC address 52:54:00:e5:6c:9e in network mk-ha-063873
	I0203 10:51:00.429750  130347 main.go:141] libmachine: (ha-063873-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:6c:9e", ip: ""} in network mk-ha-063873: {Iface:virbr1 ExpiryTime:2025-02-03 11:47:09 +0000 UTC Type:0 Mac:52:54:00:e5:6c:9e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-063873-m03 Clientid:01:52:54:00:e5:6c:9e}
	I0203 10:51:00.429780  130347 main.go:141] libmachine: (ha-063873-m03) DBG | domain ha-063873-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:6c:9e in network mk-ha-063873
	I0203 10:51:00.430093  130347 main.go:141] libmachine: (ha-063873-m03) Calling .GetSSHPort
	I0203 10:51:00.430308  130347 main.go:141] libmachine: (ha-063873-m03) Calling .GetSSHKeyPath
	I0203 10:51:00.430485  130347 main.go:141] libmachine: (ha-063873-m03) Calling .GetSSHUsername
	I0203 10:51:00.430624  130347 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/ha-063873-m03/id_rsa Username:docker}
	I0203 10:51:00.515053  130347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 10:51:00.533829  130347 kubeconfig.go:125] found "ha-063873" server: "https://192.168.39.254:8443"
	I0203 10:51:00.533876  130347 api_server.go:166] Checking apiserver status ...
	I0203 10:51:00.533909  130347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 10:51:00.551257  130347 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	W0203 10:51:00.562165  130347 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 10:51:00.562225  130347 ssh_runner.go:195] Run: ls
	I0203 10:51:00.568051  130347 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0203 10:51:00.573531  130347 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0203 10:51:00.573555  130347 status.go:463] ha-063873-m03 apiserver status = Running (err=<nil>)
	I0203 10:51:00.573563  130347 status.go:176] ha-063873-m03 status: &{Name:ha-063873-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 10:51:00.573579  130347 status.go:174] checking status of ha-063873-m04 ...
	I0203 10:51:00.573853  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.573887  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.589197  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0203 10:51:00.589662  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.590175  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.590204  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.590529  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.590705  130347 main.go:141] libmachine: (ha-063873-m04) Calling .GetState
	I0203 10:51:00.592419  130347 status.go:371] ha-063873-m04 host status = "Running" (err=<nil>)
	I0203 10:51:00.592434  130347 host.go:66] Checking if "ha-063873-m04" exists ...
	I0203 10:51:00.592722  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.592765  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.608990  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I0203 10:51:00.609446  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.609927  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.609949  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.610358  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.610586  130347 main.go:141] libmachine: (ha-063873-m04) Calling .GetIP
	I0203 10:51:00.613019  130347 main.go:141] libmachine: (ha-063873-m04) DBG | domain ha-063873-m04 has defined MAC address 52:54:00:fe:c4:db in network mk-ha-063873
	I0203 10:51:00.613434  130347 main.go:141] libmachine: (ha-063873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:db", ip: ""} in network mk-ha-063873: {Iface:virbr1 ExpiryTime:2025-02-03 11:48:35 +0000 UTC Type:0 Mac:52:54:00:fe:c4:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-063873-m04 Clientid:01:52:54:00:fe:c4:db}
	I0203 10:51:00.613463  130347 main.go:141] libmachine: (ha-063873-m04) DBG | domain ha-063873-m04 has defined IP address 192.168.39.89 and MAC address 52:54:00:fe:c4:db in network mk-ha-063873
	I0203 10:51:00.613581  130347 host.go:66] Checking if "ha-063873-m04" exists ...
	I0203 10:51:00.613913  130347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 10:51:00.613959  130347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 10:51:00.629557  130347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I0203 10:51:00.629943  130347 main.go:141] libmachine: () Calling .GetVersion
	I0203 10:51:00.630500  130347 main.go:141] libmachine: Using API Version  1
	I0203 10:51:00.630523  130347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 10:51:00.630795  130347 main.go:141] libmachine: () Calling .GetMachineName
	I0203 10:51:00.631033  130347 main.go:141] libmachine: (ha-063873-m04) Calling .DriverName
	I0203 10:51:00.631238  130347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 10:51:00.631260  130347 main.go:141] libmachine: (ha-063873-m04) Calling .GetSSHHostname
	I0203 10:51:00.634297  130347 main.go:141] libmachine: (ha-063873-m04) DBG | domain ha-063873-m04 has defined MAC address 52:54:00:fe:c4:db in network mk-ha-063873
	I0203 10:51:00.634745  130347 main.go:141] libmachine: (ha-063873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:db", ip: ""} in network mk-ha-063873: {Iface:virbr1 ExpiryTime:2025-02-03 11:48:35 +0000 UTC Type:0 Mac:52:54:00:fe:c4:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-063873-m04 Clientid:01:52:54:00:fe:c4:db}
	I0203 10:51:00.634762  130347 main.go:141] libmachine: (ha-063873-m04) DBG | domain ha-063873-m04 has defined IP address 192.168.39.89 and MAC address 52:54:00:fe:c4:db in network mk-ha-063873
	I0203 10:51:00.634917  130347 main.go:141] libmachine: (ha-063873-m04) Calling .GetSSHPort
	I0203 10:51:00.635109  130347 main.go:141] libmachine: (ha-063873-m04) Calling .GetSSHKeyPath
	I0203 10:51:00.635249  130347 main.go:141] libmachine: (ha-063873-m04) Calling .GetSSHUsername
	I0203 10:51:00.635381  130347 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/ha-063873-m04/id_rsa Username:docker}
	I0203 10:51:00.721898  130347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 10:51:00.739443  130347 status.go:176] ha-063873-m04 status: &{Name:ha-063873-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.67s)
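
In the run above, once m02 is stopped the status command still prints the per-node breakdown on stdout but exits with status 7; the same combination appears again under StopCluster further below. The following is a small Go sketch of reading that exit code alongside the stdout breakdown, offered only as an illustration of the behaviour captured here, not as part of the test suite.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as in the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-063873", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.Output() // stdout still carries the per-node status block

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // 7 in the stopped-node runs captured in this report
		} else if err != nil {
			panic(err) // the command could not be started at all
		}

		fmt.Printf("exit code: %d\n%s", code, out)
		if code != 0 {
			fmt.Println("at least one node is not fully running; see the breakdown above")
		}
	}
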

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (51.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 node start m02 -v=7 --alsologtostderr
E0203 10:51:43.991636  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-063873 node start m02 -v=7 --alsologtostderr: (50.179345211s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (51.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (426.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-063873 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-063873 -v=7 --alsologtostderr
E0203 10:54:00.130354  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:54:27.833399  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 10:55:42.117508  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-063873 -v=7 --alsologtostderr: (4m34.266547804s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-063873 --wait=true -v=7 --alsologtostderr
E0203 10:57:05.192899  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-063873 --wait=true -v=7 --alsologtostderr: (2m31.809667314s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-063873
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (426.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 node delete m03 -v=7 --alsologtostderr
E0203 10:59:00.130447  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-063873 node delete m03 -v=7 --alsologtostderr: (17.361044723s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (272.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 stop -v=7 --alsologtostderr
E0203 11:00:42.117783  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-063873 stop -v=7 --alsologtostderr: (4m32.832527612s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr: exit status 7 (113.684562ms)

                                                
                                                
-- stdout --
	ha-063873
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-063873-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-063873-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:03:51.277253  134580 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:03:51.277353  134580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:03:51.277364  134580 out.go:358] Setting ErrFile to fd 2...
	I0203 11:03:51.277369  134580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:03:51.277535  134580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:03:51.277714  134580 out.go:352] Setting JSON to false
	I0203 11:03:51.277751  134580 mustload.go:65] Loading cluster: ha-063873
	I0203 11:03:51.277820  134580 notify.go:220] Checking for updates...
	I0203 11:03:51.278374  134580 config.go:182] Loaded profile config "ha-063873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:03:51.278405  134580 status.go:174] checking status of ha-063873 ...
	I0203 11:03:51.278899  134580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:03:51.278943  134580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:03:51.302191  134580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0203 11:03:51.302693  134580 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:03:51.303406  134580 main.go:141] libmachine: Using API Version  1
	I0203 11:03:51.303439  134580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:03:51.303898  134580 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:03:51.304090  134580 main.go:141] libmachine: (ha-063873) Calling .GetState
	I0203 11:03:51.305862  134580 status.go:371] ha-063873 host status = "Stopped" (err=<nil>)
	I0203 11:03:51.305880  134580 status.go:384] host is not running, skipping remaining checks
	I0203 11:03:51.305887  134580 status.go:176] ha-063873 status: &{Name:ha-063873 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:03:51.305909  134580 status.go:174] checking status of ha-063873-m02 ...
	I0203 11:03:51.306274  134580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:03:51.306319  134580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:03:51.320926  134580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
	I0203 11:03:51.321351  134580 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:03:51.321834  134580 main.go:141] libmachine: Using API Version  1
	I0203 11:03:51.321863  134580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:03:51.322196  134580 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:03:51.322400  134580 main.go:141] libmachine: (ha-063873-m02) Calling .GetState
	I0203 11:03:51.324143  134580 status.go:371] ha-063873-m02 host status = "Stopped" (err=<nil>)
	I0203 11:03:51.324159  134580 status.go:384] host is not running, skipping remaining checks
	I0203 11:03:51.324167  134580 status.go:176] ha-063873-m02 status: &{Name:ha-063873-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:03:51.324197  134580 status.go:174] checking status of ha-063873-m04 ...
	I0203 11:03:51.324521  134580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:03:51.324564  134580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:03:51.339147  134580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0203 11:03:51.339513  134580 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:03:51.339989  134580 main.go:141] libmachine: Using API Version  1
	I0203 11:03:51.340021  134580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:03:51.340327  134580 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:03:51.340504  134580 main.go:141] libmachine: (ha-063873-m04) Calling .GetState
	I0203 11:03:51.341942  134580 status.go:371] ha-063873-m04 host status = "Stopped" (err=<nil>)
	I0203 11:03:51.341958  134580 status.go:384] host is not running, skipping remaining checks
	I0203 11:03:51.341965  134580 status.go:176] ha-063873-m04 status: &{Name:ha-063873-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (120.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-063873 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0203 11:04:00.130297  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:05:23.195326  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:05:42.117592  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-063873 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.10916029s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-063873 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-063873 --control-plane -v=7 --alsologtostderr: (1m15.875674527s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-063873 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
x
+
TestJSONOutput/start/Command (57.68s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-817561 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-817561 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (57.674775271s)
--- PASS: TestJSONOutput/start/Command (57.68s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
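
The DistinctCurrentSteps and IncreasingCurrentSteps subtests above validate ordering properties of the step events that start --output=json emits; each event is a JSON record whose data.currentstep field counts upward toward totalsteps (the event shape is visible in the TestErrorJSONOutput stdout later in this report). The sketch below is a plausible reconstruction of that kind of check, not the code in json_output_test.go.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"strconv"
	)

	type stepEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	// checkSteps returns an error if step numbers repeat or go backwards.
	func checkSteps(lines []string) error {
		last := -1
		seen := map[int]bool{}
		for _, l := range lines {
			var ev stepEvent
			if err := json.Unmarshal([]byte(l), &ev); err != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue // ignore non-step events and unparsable lines
			}
			n, err := strconv.Atoi(ev.Data["currentstep"])
			if err != nil {
				return err
			}
			if seen[n] {
				return fmt.Errorf("duplicate step %d", n)
			}
			if n <= last {
				return fmt.Errorf("step %d not increasing (previous %d)", n, last)
			}
			seen[n] = true
			last = n
		}
		return nil
	}

	func main() {
		// The second line here is hypothetical; the first mirrors an event
		// captured under TestErrorJSONOutput below.
		sample := []string{
			`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","name":"Initial Minikube Setup","totalsteps":"19"}}`,
			`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","totalsteps":"19"}}`,
		}
		if err := checkSteps(sample); err != nil {
			log.Fatal(err)
		}
		fmt.Println("step numbers are distinct and increasing")
	}
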

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-817561 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-817561 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-817561 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-817561 --output=json --user=testUser: (7.327132563s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-592178 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-592178 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.07373ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ba42182d-2763-4596-b60b-928d5c958d67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-592178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c309937a-4cb7-4319-92e2-be94ae28caab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20354"}}
	{"specversion":"1.0","id":"5e3cec09-3380-4fbb-ae38-9700eb8ab090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"abeef3f9-f9d4-4278-bb78-4eaa14242b18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig"}}
	{"specversion":"1.0","id":"12c92592-3644-49a7-af99-5568e1480a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube"}}
	{"specversion":"1.0","id":"e8b722f3-2426-4a23-91c5-6591e26e386c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"61dab6f4-1eec-420c-9d37-414d202db845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3f9f044-4c20-405e-92c7-40ed67f83e6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-592178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-592178
--- PASS: TestErrorJSONOutput (0.20s)
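
Each line in the stdout block above is a CloudEvents-style record (specversion, id, source, type, datacontenttype, data). Below is a minimal decoder for the fields visible in this report, illustrated with the DRV_UNSUPPORTED_OS error event captured above; the struct models only what is shown here, not minikube's full event schema.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// The error event emitted by TestErrorJSONOutput above.
		line := `{"specversion":"1.0","id":"f3f9f044-4c20-405e-92c7-40ed67f83e6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			log.Fatal(err)
		}
		fmt.Println(ev.Type, "->", ev.Data["name"], "exitcode", ev.Data["exitcode"])
	}
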

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (90.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-271346 --driver=kvm2  --container-runtime=crio
E0203 11:09:00.135397  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-271346 --driver=kvm2  --container-runtime=crio: (43.26805428s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-286279 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-286279 --driver=kvm2  --container-runtime=crio: (44.040180921s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-271346
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-286279
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-286279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-286279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-286279: (1.003194479s)
helpers_test.go:175: Cleaning up "first-271346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-271346
--- PASS: TestMinikubeProfile (90.40s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-723943 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-723943 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.724256491s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.72s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-723943 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-723943 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-739857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0203 11:10:42.117176  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-739857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.863883193s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-739857 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-739857 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-723943 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-739857 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-739857 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-739857
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-739857: (1.275605185s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.56s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-739857
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-739857: (23.564325437s)
--- PASS: TestMountStart/serial/RestartStopped (24.56s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-739857 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-739857 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-728008 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-728008 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.535109531s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.97s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-728008 -- rollout status deployment/busybox: (4.01311819s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-bv876 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-l86hz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-bv876 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-l86hz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-bv876 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-l86hz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-bv876 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-bv876 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-l86hz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-728008 -- exec busybox-58667487b6-l86hz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (50.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-728008 -v 3 --alsologtostderr
E0203 11:13:45.195245  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:14:00.130843  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-728008 -v 3 --alsologtostderr: (50.119650465s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.71s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-728008 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp testdata/cp-test.txt multinode-728008:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3852905076/001/cp-test_multinode-728008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008:/home/docker/cp-test.txt multinode-728008-m02:/home/docker/cp-test_multinode-728008_multinode-728008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m02 "sudo cat /home/docker/cp-test_multinode-728008_multinode-728008-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008:/home/docker/cp-test.txt multinode-728008-m03:/home/docker/cp-test_multinode-728008_multinode-728008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m03 "sudo cat /home/docker/cp-test_multinode-728008_multinode-728008-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp testdata/cp-test.txt multinode-728008-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3852905076/001/cp-test_multinode-728008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008-m02:/home/docker/cp-test.txt multinode-728008:/home/docker/cp-test_multinode-728008-m02_multinode-728008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008 "sudo cat /home/docker/cp-test_multinode-728008-m02_multinode-728008.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008-m02:/home/docker/cp-test.txt multinode-728008-m03:/home/docker/cp-test_multinode-728008-m02_multinode-728008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m03 "sudo cat /home/docker/cp-test_multinode-728008-m02_multinode-728008-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp testdata/cp-test.txt multinode-728008-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3852905076/001/cp-test_multinode-728008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008-m03:/home/docker/cp-test.txt multinode-728008:/home/docker/cp-test_multinode-728008-m03_multinode-728008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008 "sudo cat /home/docker/cp-test_multinode-728008-m03_multinode-728008.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 cp multinode-728008-m03:/home/docker/cp-test.txt multinode-728008-m02:/home/docker/cp-test_multinode-728008-m03_multinode-728008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 ssh -n multinode-728008-m02 "sudo cat /home/docker/cp-test_multinode-728008-m03_multinode-728008-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.33s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-728008 node stop m03: (1.388953159s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-728008 status: exit status 7 (433.969931ms)

                                                
                                                
-- stdout --
	multinode-728008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-728008-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-728008-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr: exit status 7 (438.924342ms)

                                                
                                                
-- stdout --
	multinode-728008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-728008-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-728008-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:14:24.084657  142369 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:14:24.084771  142369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:14:24.084783  142369 out.go:358] Setting ErrFile to fd 2...
	I0203 11:14:24.084788  142369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:14:24.084957  142369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:14:24.085115  142369 out.go:352] Setting JSON to false
	I0203 11:14:24.085146  142369 mustload.go:65] Loading cluster: multinode-728008
	I0203 11:14:24.085280  142369 notify.go:220] Checking for updates...
	I0203 11:14:24.085554  142369 config.go:182] Loaded profile config "multinode-728008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:14:24.085575  142369 status.go:174] checking status of multinode-728008 ...
	I0203 11:14:24.085954  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.086023  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.103379  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0203 11:14:24.103857  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.104428  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.104459  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.104803  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.105006  142369 main.go:141] libmachine: (multinode-728008) Calling .GetState
	I0203 11:14:24.106592  142369 status.go:371] multinode-728008 host status = "Running" (err=<nil>)
	I0203 11:14:24.106613  142369 host.go:66] Checking if "multinode-728008" exists ...
	I0203 11:14:24.107044  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.107096  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.124102  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I0203 11:14:24.124619  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.125224  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.125248  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.125591  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.125798  142369 main.go:141] libmachine: (multinode-728008) Calling .GetIP
	I0203 11:14:24.128932  142369 main.go:141] libmachine: (multinode-728008) DBG | domain multinode-728008 has defined MAC address 52:54:00:4d:4c:33 in network mk-multinode-728008
	I0203 11:14:24.129396  142369 main.go:141] libmachine: (multinode-728008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:4c:33", ip: ""} in network mk-multinode-728008: {Iface:virbr1 ExpiryTime:2025-02-03 12:11:36 +0000 UTC Type:0 Mac:52:54:00:4d:4c:33 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-728008 Clientid:01:52:54:00:4d:4c:33}
	I0203 11:14:24.129433  142369 main.go:141] libmachine: (multinode-728008) DBG | domain multinode-728008 has defined IP address 192.168.39.80 and MAC address 52:54:00:4d:4c:33 in network mk-multinode-728008
	I0203 11:14:24.129627  142369 host.go:66] Checking if "multinode-728008" exists ...
	I0203 11:14:24.130096  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.130156  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.145784  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0203 11:14:24.146264  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.146746  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.146769  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.147078  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.147297  142369 main.go:141] libmachine: (multinode-728008) Calling .DriverName
	I0203 11:14:24.147509  142369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:14:24.147554  142369 main.go:141] libmachine: (multinode-728008) Calling .GetSSHHostname
	I0203 11:14:24.150677  142369 main.go:141] libmachine: (multinode-728008) DBG | domain multinode-728008 has defined MAC address 52:54:00:4d:4c:33 in network mk-multinode-728008
	I0203 11:14:24.151176  142369 main.go:141] libmachine: (multinode-728008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:4c:33", ip: ""} in network mk-multinode-728008: {Iface:virbr1 ExpiryTime:2025-02-03 12:11:36 +0000 UTC Type:0 Mac:52:54:00:4d:4c:33 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-728008 Clientid:01:52:54:00:4d:4c:33}
	I0203 11:14:24.151222  142369 main.go:141] libmachine: (multinode-728008) DBG | domain multinode-728008 has defined IP address 192.168.39.80 and MAC address 52:54:00:4d:4c:33 in network mk-multinode-728008
	I0203 11:14:24.151294  142369 main.go:141] libmachine: (multinode-728008) Calling .GetSSHPort
	I0203 11:14:24.151468  142369 main.go:141] libmachine: (multinode-728008) Calling .GetSSHKeyPath
	I0203 11:14:24.151600  142369 main.go:141] libmachine: (multinode-728008) Calling .GetSSHUsername
	I0203 11:14:24.151698  142369 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/multinode-728008/id_rsa Username:docker}
	I0203 11:14:24.237148  142369 ssh_runner.go:195] Run: systemctl --version
	I0203 11:14:24.242822  142369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:14:24.256673  142369 kubeconfig.go:125] found "multinode-728008" server: "https://192.168.39.80:8443"
	I0203 11:14:24.256710  142369 api_server.go:166] Checking apiserver status ...
	I0203 11:14:24.256745  142369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:14:24.269190  142369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0203 11:14:24.279301  142369 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:14:24.279347  142369 ssh_runner.go:195] Run: ls
	I0203 11:14:24.283348  142369 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0203 11:14:24.287970  142369 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0203 11:14:24.287990  142369 status.go:463] multinode-728008 apiserver status = Running (err=<nil>)
	I0203 11:14:24.287998  142369 status.go:176] multinode-728008 status: &{Name:multinode-728008 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:14:24.288014  142369 status.go:174] checking status of multinode-728008-m02 ...
	I0203 11:14:24.288294  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.288329  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.303802  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I0203 11:14:24.304322  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.304798  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.304820  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.305067  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.305218  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .GetState
	I0203 11:14:24.306697  142369 status.go:371] multinode-728008-m02 host status = "Running" (err=<nil>)
	I0203 11:14:24.306715  142369 host.go:66] Checking if "multinode-728008-m02" exists ...
	I0203 11:14:24.306986  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.307025  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.322504  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0203 11:14:24.322992  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.323457  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.323477  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.323824  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.324001  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .GetIP
	I0203 11:14:24.327205  142369 main.go:141] libmachine: (multinode-728008-m02) DBG | domain multinode-728008-m02 has defined MAC address 52:54:00:10:2e:fb in network mk-multinode-728008
	I0203 11:14:24.327668  142369 main.go:141] libmachine: (multinode-728008-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:2e:fb", ip: ""} in network mk-multinode-728008: {Iface:virbr1 ExpiryTime:2025-02-03 12:12:42 +0000 UTC Type:0 Mac:52:54:00:10:2e:fb Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-728008-m02 Clientid:01:52:54:00:10:2e:fb}
	I0203 11:14:24.327715  142369 main.go:141] libmachine: (multinode-728008-m02) DBG | domain multinode-728008-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:10:2e:fb in network mk-multinode-728008
	I0203 11:14:24.327879  142369 host.go:66] Checking if "multinode-728008-m02" exists ...
	I0203 11:14:24.328172  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.328220  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.344209  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0203 11:14:24.344600  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.345043  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.345073  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.345377  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.345593  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .DriverName
	I0203 11:14:24.345766  142369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:14:24.345789  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .GetSSHHostname
	I0203 11:14:24.348340  142369 main.go:141] libmachine: (multinode-728008-m02) DBG | domain multinode-728008-m02 has defined MAC address 52:54:00:10:2e:fb in network mk-multinode-728008
	I0203 11:14:24.348736  142369 main.go:141] libmachine: (multinode-728008-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:2e:fb", ip: ""} in network mk-multinode-728008: {Iface:virbr1 ExpiryTime:2025-02-03 12:12:42 +0000 UTC Type:0 Mac:52:54:00:10:2e:fb Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-728008-m02 Clientid:01:52:54:00:10:2e:fb}
	I0203 11:14:24.348767  142369 main.go:141] libmachine: (multinode-728008-m02) DBG | domain multinode-728008-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:10:2e:fb in network mk-multinode-728008
	I0203 11:14:24.348873  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .GetSSHPort
	I0203 11:14:24.349054  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .GetSSHKeyPath
	I0203 11:14:24.349221  142369 main.go:141] libmachine: (multinode-728008-m02) Calling .GetSSHUsername
	I0203 11:14:24.349335  142369 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20354-109432/.minikube/machines/multinode-728008-m02/id_rsa Username:docker}
	I0203 11:14:24.433724  142369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:14:24.452729  142369 status.go:176] multinode-728008-m02 status: &{Name:multinode-728008-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:14:24.452790  142369 status.go:174] checking status of multinode-728008-m03 ...
	I0203 11:14:24.453238  142369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:14:24.453293  142369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:14:24.470555  142369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I0203 11:14:24.471014  142369 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:14:24.471429  142369 main.go:141] libmachine: Using API Version  1
	I0203 11:14:24.471462  142369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:14:24.471846  142369 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:14:24.472083  142369 main.go:141] libmachine: (multinode-728008-m03) Calling .GetState
	I0203 11:14:24.473841  142369 status.go:371] multinode-728008-m03 host status = "Stopped" (err=<nil>)
	I0203 11:14:24.473858  142369 status.go:384] host is not running, skipping remaining checks
	I0203 11:14:24.473866  142369 status.go:176] multinode-728008-m03 status: &{Name:multinode-728008-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
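
Both status invocations above exit with status 7 once a node is stopped, even though the command still prints per-node state, so wrappers should capture that exit code rather than treat it as a hard failure. A short shell sketch of that pattern, reusing the profile name from the log:

	# Run status and keep the exit code; it is 7 here because multinode-728008-m03 is stopped.
	out/minikube-linux-amd64 -p multinode-728008 status
	rc=$?
	if [ "$rc" -ne 0 ]; then
		echo "minikube status exited with $rc - at least one node is not running"
	fi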

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-728008 node start m03 -v=7 --alsologtostderr: (38.37308837s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (342.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-728008
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-728008
E0203 11:15:42.118656  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-728008: (3m3.38052242s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-728008 --wait=true -v=8 --alsologtostderr
E0203 11:19:00.130914  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:20:42.118025  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-728008 --wait=true -v=8 --alsologtostderr: (2m39.286363452s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-728008
--- PASS: TestMultiNode/serial/RestartKeepsNodes (342.77s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-728008 node delete m03: (2.075987633s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.61s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 stop
E0203 11:22:03.197414  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-728008 stop: (3m1.680927577s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-728008 status: exit status 7 (92.435502ms)

                                                
                                                
-- stdout --
	multinode-728008
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-728008-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr: exit status 7 (93.702207ms)

                                                
                                                
-- stdout --
	multinode-728008
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-728008-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:23:50.794171  145834 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:23:50.794283  145834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:23:50.794299  145834 out.go:358] Setting ErrFile to fd 2...
	I0203 11:23:50.794303  145834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:23:50.794516  145834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:23:50.794723  145834 out.go:352] Setting JSON to false
	I0203 11:23:50.794763  145834 mustload.go:65] Loading cluster: multinode-728008
	I0203 11:23:50.794852  145834 notify.go:220] Checking for updates...
	I0203 11:23:50.795268  145834 config.go:182] Loaded profile config "multinode-728008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:23:50.795290  145834 status.go:174] checking status of multinode-728008 ...
	I0203 11:23:50.795737  145834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:23:50.795815  145834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:23:50.810488  145834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0203 11:23:50.811006  145834 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:23:50.811701  145834 main.go:141] libmachine: Using API Version  1
	I0203 11:23:50.811738  145834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:23:50.812056  145834 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:23:50.812266  145834 main.go:141] libmachine: (multinode-728008) Calling .GetState
	I0203 11:23:50.813580  145834 status.go:371] multinode-728008 host status = "Stopped" (err=<nil>)
	I0203 11:23:50.813597  145834 status.go:384] host is not running, skipping remaining checks
	I0203 11:23:50.813604  145834 status.go:176] multinode-728008 status: &{Name:multinode-728008 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:23:50.813650  145834 status.go:174] checking status of multinode-728008-m02 ...
	I0203 11:23:50.813966  145834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0203 11:23:50.814026  145834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0203 11:23:50.828753  145834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I0203 11:23:50.829284  145834 main.go:141] libmachine: () Calling .GetVersion
	I0203 11:23:50.829807  145834 main.go:141] libmachine: Using API Version  1
	I0203 11:23:50.829831  145834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0203 11:23:50.830198  145834 main.go:141] libmachine: () Calling .GetMachineName
	I0203 11:23:50.830486  145834 main.go:141] libmachine: (multinode-728008-m02) Calling .GetState
	I0203 11:23:50.832138  145834 status.go:371] multinode-728008-m02 host status = "Stopped" (err=<nil>)
	I0203 11:23:50.832153  145834 status.go:384] host is not running, skipping remaining checks
	I0203 11:23:50.832161  145834 status.go:176] multinode-728008-m02 status: &{Name:multinode-728008-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (115.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-728008 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0203 11:24:00.131224  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:25:42.117340  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-728008 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.416926673s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-728008 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.96s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-728008
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-728008-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-728008-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.381194ms)

                                                
                                                
-- stdout --
	* [multinode-728008-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-728008-m02' is duplicated with machine name 'multinode-728008-m02' in profile 'multinode-728008'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-728008-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-728008-m03 --driver=kvm2  --container-runtime=crio: (43.287479034s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-728008
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-728008: exit status 80 (217.570747ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-728008 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-728008-m03 already exists in multinode-728008-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-728008-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.39s)

                                                
                                    
TestScheduledStopUnix (115.57s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-919756 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-919756 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.791054602s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-919756 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-919756 -n scheduled-stop-919756
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-919756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0203 11:30:13.551543  116606 retry.go:31] will retry after 69.044µs: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.552754  116606 retry.go:31] will retry after 151.447µs: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.553921  116606 retry.go:31] will retry after 247.953µs: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.555100  116606 retry.go:31] will retry after 425.023µs: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.556248  116606 retry.go:31] will retry after 392.627µs: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.557440  116606 retry.go:31] will retry after 1.092201ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.558627  116606 retry.go:31] will retry after 1.055155ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.559801  116606 retry.go:31] will retry after 2.160398ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.563017  116606 retry.go:31] will retry after 1.645886ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.565300  116606 retry.go:31] will retry after 4.063422ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.569491  116606 retry.go:31] will retry after 7.756679ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.577752  116606 retry.go:31] will retry after 10.956106ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.589051  116606 retry.go:31] will retry after 12.886123ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.602429  116606 retry.go:31] will retry after 10.985004ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.613733  116606 retry.go:31] will retry after 27.385733ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
I0203 11:30:13.642163  116606 retry.go:31] will retry after 54.13779ms: open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/scheduled-stop-919756/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-919756 --cancel-scheduled
E0203 11:30:25.198881  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-919756 -n scheduled-stop-919756
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-919756
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-919756 --schedule 15s
E0203 11:30:42.118619  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-919756
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-919756: exit status 7 (75.023191ms)

                                                
                                                
-- stdout --
	scheduled-stop-919756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-919756 -n scheduled-stop-919756
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-919756 -n scheduled-stop-919756: exit status 7 (69.464021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-919756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-919756
--- PASS: TestScheduledStopUnix (115.57s)
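
The scheduled-stop flow exercised above reduces to three flags on the stop command; a condensed sketch with a hypothetical profile name (5m and 15s are simply the durations this test uses):

	# Schedule a stop five minutes out, then replace that schedule with a 15 second one.
	out/minikube-linux-amd64 stop -p scheduled-stop-demo --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-demo --schedule 15s
	# Cancel a pending scheduled stop before it fires.
	out/minikube-linux-amd64 stop -p scheduled-stop-demo --cancel-scheduled
	# After the schedule fires, status reports Stopped and exits 7, as seen in the log above.
	out/minikube-linux-amd64 status -p scheduled-stop-demo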

                                                
                                    
TestRunningBinaryUpgrade (233.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1538935891 start -p running-upgrade-191474 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1538935891 start -p running-upgrade-191474 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.52997283s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-191474 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-191474 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m41.445500855s)
helpers_test.go:175: Cleaning up "running-upgrade-191474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-191474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-191474: (1.200724501s)
--- PASS: TestRunningBinaryUpgrade (233.52s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-178849 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-178849 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (86.464926ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-178849] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
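
The MK_USAGE failure above is the expected outcome: minikube refuses --kubernetes-version together with --no-kubernetes and exits with code 14 before doing any work. A minimal sketch of that assertion outside the real harness is to invoke the binary with the conflicting flags and read the code from exec.ExitError; the profile name here is hypothetical.

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    // Conflicting flags copied from the log above; the profile name is made up.
    cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "nok8s-demo",
        "--no-kubernetes", "--kubernetes-version=1.20",
        "--driver=kvm2", "--container-runtime=crio")
    out, err := cmd.CombinedOutput()

    var exitErr *exec.ExitError
    if errors.As(err, &exitErr) {
        // The log above reports exit status 14 (MK_USAGE) for this combination.
        fmt.Printf("exit code %d\n%s", exitErr.ExitCode(), out)
        return
    }
    fmt.Printf("unexpected result: err=%v\n%s", err, out)
}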

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-178849 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-178849 --driver=kvm2  --container-runtime=crio: (1m35.928888282s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-178849 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.20s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (69.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-178849 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-178849 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m7.904679721s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-178849 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-178849 status -o json: exit status 2 (296.701194ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-178849","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-178849
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-178849: (1.464418977s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (69.67s)
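
The status call in this test returns machine-readable JSON (the stdout block above) with Host "Running" but Kubelet and APIServer "Stopped", and exits non-zero because components are down. A small sketch for consuming that output, mirroring only the fields visible in this log:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

// profileStatus mirrors only the fields shown in the stdout block above;
// the real status document may carry more.
type profileStatus struct {
    Name       string `json:"Name"`
    Host       string `json:"Host"`
    Kubelet    string `json:"Kubelet"`
    APIServer  string `json:"APIServer"`
    Kubeconfig string `json:"Kubeconfig"`
    Worker     bool   `json:"Worker"`
}

func main() {
    // status exits non-zero when components are stopped (exit status 2 above),
    // so keep whatever stdout was captured even when err != nil.
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-178849",
        "status", "-o", "json").Output()
    if len(out) == 0 && err != nil {
        log.Fatal(err)
    }
    var st profileStatus
    if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
        log.Fatal(jsonErr)
    }
    fmt.Printf("host=%s kubelet=%s apiserver=%s worker=%v\n",
        st.Host, st.Kubelet, st.APIServer, st.Worker)
}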

                                                
                                    
TestNoKubernetes/serial/Start (52.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-178849 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-178849 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.221098499s)
--- PASS: TestNoKubernetes/serial/Start (52.22s)

                                                
                                    
TestNetworkPlugins/group/false (3.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-927018 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-927018 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.351624ms)

                                                
                                                
-- stdout --
	* [false-927018] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:34:58.302726  153230 out.go:345] Setting OutFile to fd 1 ...
	I0203 11:34:58.302855  153230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:34:58.302864  153230 out.go:358] Setting ErrFile to fd 2...
	I0203 11:34:58.302871  153230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:34:58.303067  153230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20354-109432/.minikube/bin
	I0203 11:34:58.303676  153230 out.go:352] Setting JSON to false
	I0203 11:34:58.304625  153230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8240,"bootTime":1738574258,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 11:34:58.304731  153230 start.go:139] virtualization: kvm guest
	I0203 11:34:58.307346  153230 out.go:177] * [false-927018] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 11:34:58.309190  153230 notify.go:220] Checking for updates...
	I0203 11:34:58.309204  153230 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:34:58.310951  153230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:34:58.312499  153230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20354-109432/kubeconfig
	I0203 11:34:58.313913  153230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20354-109432/.minikube
	I0203 11:34:58.315446  153230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 11:34:58.316822  153230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:34:58.318831  153230 config.go:182] Loaded profile config "NoKubernetes-178849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0203 11:34:58.318936  153230 config.go:182] Loaded profile config "cert-expiration-149645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0203 11:34:58.319044  153230 config.go:182] Loaded profile config "running-upgrade-191474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0203 11:34:58.319155  153230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:34:58.357316  153230 out.go:177] * Using the kvm2 driver based on user configuration
	I0203 11:34:58.358652  153230 start.go:297] selected driver: kvm2
	I0203 11:34:58.358679  153230 start.go:901] validating driver "kvm2" against <nil>
	I0203 11:34:58.358696  153230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:34:58.360894  153230 out.go:201] 
	W0203 11:34:58.362537  153230 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0203 11:34:58.364267  153230 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-927018 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-927018" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.82:8443
  name: cert-expiration-149645
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.224:8443
  name: running-upgrade-191474
contexts:
- context:
    cluster: cert-expiration-149645
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-149645
  name: cert-expiration-149645
- context:
    cluster: running-upgrade-191474
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-191474
  name: running-upgrade-191474
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-149645
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/client.crt
    client-key: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/client.key
- name: running-upgrade-191474
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/running-upgrade-191474/client.crt
    client-key: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/running-upgrade-191474/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-927018

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-927018"

                                                
                                                
----------------------- debugLogs end: false-927018 [took: 3.657543598s] --------------------------------
helpers_test.go:175: Cleaning up "false-927018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-927018
--- PASS: TestNetworkPlugins/group/false (3.96s)
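
Every kubectl probe in the debugLogs above fails with "context was not found" because the false-927018 start was rejected at validation time ("crio" requires CNI), so no such context was ever written; the kubectl config dump shows only cert-expiration-149645 and running-upgrade-191474. The sketch below is a hedged example of inspecting that kubeconfig programmatically with client-go's clientcmd loader; it assumes a go.mod dependency on k8s.io/client-go, and the path is the KUBECONFIG printed earlier in this report.

package main

import (
    "fmt"
    "log"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // KUBECONFIG path as printed earlier in this report; adjust for your environment.
    kubeconfig := "/home/jenkins/minikube-integration/20354-109432/kubeconfig"
    cfg, err := clientcmd.LoadFromFile(kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    for name := range cfg.Contexts {
        fmt.Println("context:", name)
    }
    fmt.Printf("current-context: %q\n", cfg.CurrentContext)
    if _, ok := cfg.Contexts["false-927018"]; !ok {
        fmt.Println("no false-927018 context, so every --context false-927018 call above fails")
    }
}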

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-178849 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-178849 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.912901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
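
The check above leans on systemctl semantics: "systemctl is-active --quiet" exits 0 only when the unit is active, so a non-zero exit from the ssh'd command (surfaced here as exit status 1, with ssh reporting status 3) is exactly what a --no-kubernetes profile should produce. A minimal sketch of the same probe:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Same command the test runs over minikube ssh; a non-zero exit means the
    // kubelet unit is not active, which is the desired state here.
    cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-178849",
        "sudo systemctl is-active --quiet service kubelet")
    if err := cmd.Run(); err != nil {
        fmt.Println("kubelet inactive (expected for a --no-kubernetes profile):", err)
        return
    }
    fmt.Println("kubelet is active - unexpected for a --no-kubernetes profile")
}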

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.896682189s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.78s)
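
The second invocation above asks for the profile list as JSON. Since this log does not show the JSON body, the sketch below deliberately decodes it into a generic map (assuming only that the top level is a JSON object) rather than guessing field names:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    out, err := exec.Command("out/minikube-linux-amd64", "profile", "list",
        "--output=json").Output()
    if err != nil {
        log.Fatal(err)
    }
    // Decode generically: the exact schema is whatever the binary emits.
    var doc map[string]json.RawMessage
    if err := json.Unmarshal(out, &doc); err != nil {
        log.Fatal(err)
    }
    for key, raw := range doc {
        fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
    }
}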

                                                
                                    
TestNoKubernetes/serial/Stop (2.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-178849
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-178849: (2.310368096s)
--- PASS: TestNoKubernetes/serial/Stop (2.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (43.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-178849 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-178849 --driver=kvm2  --container-runtime=crio: (43.858742952s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.86s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.11s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (124.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2597574880 start -p stopped-upgrade-574710 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0203 11:35:42.118222  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2597574880 start -p stopped-upgrade-574710 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m16.356377163s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2597574880 -p stopped-upgrade-574710 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2597574880 -p stopped-upgrade-574710 stop: (2.140025013s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-574710 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-574710 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.850922s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (124.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-178849 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-178849 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.517436ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (71.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-225830 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-225830 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m11.852653665s)
--- PASS: TestPause/serial/Start (71.85s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-574710
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (50.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (50.111826253s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (68.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m8.797735528s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.80s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (107.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m47.750504077s)
--- PASS: TestNetworkPlugins/group/calico/Start (107.75s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-927018 "pgrep -a kubelet"
I0203 11:38:22.288966  116606 config.go:182] Loaded profile config "auto-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-927018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8fv9j" [6d4b4a93-c1d0-4154-b0ed-ebd40e55f8f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8fv9j" [6d4b4a93-c1d0-4154-b0ed-ebd40e55f8f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.118232668s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
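
DNS, Localhost and HairPin above are the three in-pod probes this suite repeats for every CNI: resolve kubernetes.default, dial localhost:8080, and dial the pod's own service name (the hairpin case, which only succeeds when hairpin NAT works on the node). A sketch that replays the same kubectl exec commands, using the auto-927018 context purely as an example value:

package main

import (
    "fmt"
    "os/exec"
)

// probe runs one shell command inside the netcat deployment via kubectl exec.
func probe(ctx, name, shellCmd string) {
    out, err := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
        "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
    fmt.Printf("%-9s err=%v\n%s\n", name, err, out)
}

func main() {
    ctx := "auto-927018" // example value taken from the test above
    probe(ctx, "DNS", "nslookup kubernetes.default")
    probe(ctx, "Localhost", "nc -w 5 -i 5 -z localhost 8080")
    // HairPin: the pod dials its own service name, which only works when
    // hairpin NAT is functioning on the node.
    probe(ctx, "HairPin", "nc -w 5 -i 5 -z netcat 8080")
}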

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (80.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0203 11:39:00.130212  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m20.190848276s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (80.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mp2rw" [27f444ef-d5af-4782-8310-a4b147965a09] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003848317s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
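
The "waiting ... for pods matching" lines above come from a poll-until-Running helper. A rough sketch of that pattern, under the same label selector and timeout as the kindnet check and with an assumed polling interval, is:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    const (
        ctx      = "kindnet-927018"
        ns       = "kube-system"
        selector = "app=kindnet"
        timeout  = 10 * time.Minute
        interval = 5 * time.Second // assumed; the real helper uses its own cadence
    )
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        // One phase per line for every pod matching the selector.
        out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
            "get", "pods", "-l", selector,
            "-o", `jsonpath={range .items[*]}{.status.phase}{"\n"}{end}`).Output()
        if err == nil {
            phases := strings.Fields(string(out))
            allRunning := len(phases) > 0
            for _, p := range phases {
                if p != "Running" {
                    allRunning = false
                    break
                }
            }
            if allRunning {
                fmt.Println("all", selector, "pods Running")
                return
            }
        }
        time.Sleep(interval)
    }
    fmt.Println("timed out waiting for", selector)
}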

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-927018 "pgrep -a kubelet"
I0203 11:39:14.526646  116606 config.go:182] Loaded profile config "kindnet-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-927018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-728nl" [98951eae-923f-428f-8a53-59a42945092b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-728nl" [98951eae-923f-428f-8a53-59a42945092b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.011840416s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (75.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m15.014392448s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fng64" [f0f9fb0c-2b01-4bfd-b461-282e2c5460fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005231316s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-927018 "pgrep -a kubelet"
I0203 11:39:55.548884  116606 config.go:182] Loaded profile config "calico-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-927018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m8qcd" [f310ce32-1baa-4b64-a0f1-e183b1750934] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m8qcd" [f310ce32-1baa-4b64-a0f1-e183b1750934] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006313494s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-927018 "pgrep -a kubelet"
I0203 11:40:08.220674  116606 config.go:182] Loaded profile config "custom-flannel-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-927018 replace --force -f testdata/netcat-deployment.yaml
I0203 11:40:09.093950  116606 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-92ff7" [0cb12eeb-ff51-4d98-96a6-1b71a857fd77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-92ff7" [0cb12eeb-ff51-4d98-96a6-1b71a857fd77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003356345s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.89s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (78.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m18.353641436s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (79.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0203 11:40:42.117755  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-927018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m19.595046891s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.60s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-927018 "pgrep -a kubelet"
I0203 11:40:57.933103  116606 config.go:182] Loaded profile config "enable-default-cni-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-927018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7p9p8" [fd1a54f5-b565-46ff-b8fe-ec89bfbc9a4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7p9p8" [fd1a54f5-b565-46ff-b8fe-ec89bfbc9a4c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004640087s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gz9gm" [49993122-0cc5-4c66-bb50-6de7983ca875] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005413863s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-927018 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-927018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-c4tvs" [8d61ec4d-fa00-4b59-8d1a-3a93a78ba3f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-c4tvs" [8d61ec4d-fa00-4b59-8d1a-3a93a78ba3f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004288063s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)
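
A rough equivalent of the pod wait above using kubectl wait instead of the helpers' polling loop. Context, namespace, label selector, and the 15m budget are copied from the log; this is a sketch, not the test's actual mechanism:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
)

// Block until pods labelled app=netcat in the default namespace of the
// flannel-927018 profile report Ready, or the 15m timeout expires.
func main() {
	cmd := exec.Command("kubectl", "--context", "flannel-927018",
		"wait", "--for=condition=ready",
		"--namespace=default", "pod",
		"--selector=app=netcat", "--timeout=15m")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("pods not ready in time: %v\n", err)
	}
}
-- /sketch --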

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (80.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-085638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-085638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m20.669245919s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.67s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-927018 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0203 11:41:59.819578  116606 config.go:182] Loaded profile config "bridge-927018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-927018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-927018 replace --force -f testdata/netcat-deployment.yaml: (1.897940331s)
I0203 11:42:01.945390  116606 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0203 11:42:01.952853  116606 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tcn28" [83066000-5a5c-4761-b792-e482da037d9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tcn28" [83066000-5a5c-4761-b792-e482da037d9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004558544s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-927018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-927018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (62.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-691067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-691067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m2.617299948s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-138645 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-138645 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m19.508018178s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-085638 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a737a787-3313-4d9e-8d85-6427d2f3e52a] Pending
helpers_test.go:344: "busybox" [a737a787-3313-4d9e-8d85-6427d2f3e52a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005040792s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-085638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)
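
The DeployApp steps above replayed as a short Go sketch: create the busybox pod from testdata/busybox.yaml, then read "ulimit -n" inside it. The context name and manifest path come from the log; the readiness wait the test helpers perform between the two commands is omitted, so this is illustrative only:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the no-preload-085638 context
// (name copied from the log) and returns the combined output.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "no-preload-085638"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	if out, err := kubectl("create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	// The real test first waits for integration-test=busybox to be Running.
	out, err := kubectl("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Printf("ulimit -n: %s(err: %v)\n", out, err)
}
-- /sketch --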

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-691067 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [40e26dc1-ff30-45cb-b0f1-628f85f449e3] Pending
helpers_test.go:344: "busybox" [40e26dc1-ff30-45cb-b0f1-628f85f449e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [40e26dc1-ff30-45cb-b0f1-628f85f449e3] Running
E0203 11:43:25.053909  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:27.615449  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00405276s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-691067 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-085638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0203 11:43:22.483526  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:22.489930  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:22.501377  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:22.522747  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:22.564212  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:22.645767  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-085638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.663907298s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-085638 describe deploy/metrics-server -n kube-system
E0203 11:43:22.808054  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-085638 --alsologtostderr -v=3
E0203 11:43:23.129975  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:23.772142  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-085638 --alsologtostderr -v=3: (1m31.054322155s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-691067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-691067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015327625s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-691067 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-691067 --alsologtostderr -v=3
E0203 11:43:32.736845  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:43:42.978754  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-691067 --alsologtostderr -v=3: (1m31.247889299s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-138645 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d210b2e-c1f0-481b-a088-ef9de87fb027] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9d210b2e-c1f0-481b-a088-ef9de87fb027] Running
E0203 11:44:00.130196  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/functional-032338/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003719037s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-138645 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-138645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-138645 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-138645 --alsologtostderr -v=3
E0203 11:44:03.460805  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.317415  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.323875  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.335377  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.356922  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.398438  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.479988  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.641794  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:08.963981  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:09.605328  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:10.887219  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:13.449274  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:18.571274  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:28.813060  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:44.422688  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/auto-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.295076  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.329366  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.335781  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.347226  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.368699  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.410120  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.491661  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.653294  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:49.974911  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:50.616984  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:51.898570  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-138645 --alsologtostderr -v=3: (1m31.107222809s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085638 -n no-preload-085638
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085638 -n no-preload-085638: exit status 7 (79.259804ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-085638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
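
A sketch of the status probe above: query the host state of the stopped profile and, following the test's own "(may be ok)" note, tolerate the non-zero exit as long as the reported state is Stopped. The binary path and profile name are copied from the log; treating the exit code this way is an assumption drawn from this log, not a documented contract:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Ask minikube for the host state of the no-preload-085638 profile. On a
// stopped profile the log above shows a non-zero exit with "Stopped" on
// stdout, which is accepted here.
func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-085638", "-n", "no-preload-085638")
	out, err := cmd.Output() // stdout is still captured when the exit code is non-zero
	state := strings.TrimSpace(string(out))
	if err != nil && state != "Stopped" {
		fmt.Printf("unexpected status failure: %v (state %q)\n", err, state)
		return
	}
	fmt.Printf("host state: %s\n", state)
}
-- /sketch --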

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (346.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-085638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 11:44:54.460497  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:44:59.582706  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-085638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m46.585667134s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085638 -n no-preload-085638
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-691067 -n embed-certs-691067
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-691067 -n embed-certs-691067: exit status 7 (79.984452ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-691067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (332.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-691067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 11:45:09.071566  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.078132  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.089619  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.111096  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.152573  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.234179  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.396325  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.718127  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:09.824749  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:10.360248  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:11.642644  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:14.204285  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:19.326581  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:29.568967  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:30.257910  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/kindnet-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:30.306384  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/calico-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-691067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m32.356825253s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-691067 -n embed-certs-691067
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645: exit status 7 (83.138148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-138645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-138645 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 11:45:42.117550  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:50.050613  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.251366  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.257989  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.269860  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.291316  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.333067  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.414329  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.576181  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:45:58.898565  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-138645 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m45.844324102s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-517711 --alsologtostderr -v=3
E0203 11:47:20.191163  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-517711 --alsologtostderr -v=3: (6.305767734s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-517711 -n old-k8s-version-517711: exit status 7 (68.448345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-517711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5c56z" [1d5b387a-80e0-4648-a6df-a819fabf4a0b] Running
E0203 11:50:36.775472  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/custom-flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004069519s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xklgm" [1a313653-aeb5-4442-a179-b205033e66ad] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xklgm" [1a313653-aeb5-4442-a179-b205033e66ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.006936528s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5c56z" [1d5b387a-80e0-4648-a6df-a819fabf4a0b] Running
E0203 11:50:42.117176  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/addons-106432/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003685624s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-691067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-691067 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-691067 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-691067 -n embed-certs-691067
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-691067 -n embed-certs-691067: exit status 2 (251.551197ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-691067 -n embed-certs-691067
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-691067 -n embed-certs-691067: exit status 2 (255.284027ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-691067 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-691067 -n embed-certs-691067
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-691067 -n embed-certs-691067
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)
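
The pause round trip above as a Go sketch: pause the profile, read the APIServer and Kubelet states (the log shows minikube status exiting 2 while paused, which the test tolerates), then unpause. The binary path and profile name mirror the log; this is illustrative, not the test code:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// minikube runs the local minikube binary (path copied from the log) and
// returns trimmed stdout; non-zero exits are reported but not fatal, matching
// how the test treats status on a paused profile.
func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
	if err != nil {
		fmt.Printf("%v exited with %v\n", args, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	minikube("pause", "-p", "embed-certs-691067", "--alsologtostderr", "-v=1")
	fmt.Println("APIServer:", minikube("status", "--format={{.APIServer}}", "-p", "embed-certs-691067"))
	fmt.Println("Kubelet:  ", minikube("status", "--format={{.Kubelet}}", "-p", "embed-certs-691067"))
	minikube("unpause", "-p", "embed-certs-691067", "--alsologtostderr", "-v=1")
}
-- /sketch --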

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xklgm" [1a313653-aeb5-4442-a179-b205033e66ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004584004s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-085638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-586043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-586043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (47.344590577s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-085638 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-085638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085638 -n no-preload-085638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085638 -n no-preload-085638: exit status 2 (284.184596ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-085638 -n no-preload-085638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-085638 -n no-preload-085638: exit status 2 (256.172662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-085638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085638 -n no-preload-085638
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-085638 -n no-preload-085638
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cx4l4" [90773ba6-ff6e-412f-8954-3e0f65dea0b3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.139084759s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cx4l4" [90773ba6-ff6e-412f-8954-3e0f65dea0b3] Running
E0203 11:51:25.954622  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/enable-default-cni-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004887759s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-138645 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-138645 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-138645 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645: exit status 2 (269.146835ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645: exit status 2 (272.161957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-138645 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-138645 -n default-k8s-diff-port-138645
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-586043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-586043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065857465s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)
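The addon step above likewise maps to one command. As a hedged sketch using the names from this run, an addon can be enabled on a running profile with its image and registry overridden:

    # Enable metrics-server while overriding its image and registry,
    # exactly as the test invocation above does for profile newest-cni-586043.
    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-586043 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain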

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-586043 --alsologtostderr -v=3
E0203 11:51:41.616372  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-586043 --alsologtostderr -v=3: (10.698593722s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586043 -n newest-cni-586043
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586043 -n newest-cni-586043: exit status 7 (66.144875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-586043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
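The exit code carries the signal here: in this run, status --format={{.Host}} returns 7 for the stopped profile, and the addon can still be enabled in that state. A hedged sketch of the same check, with the profile name taken from this run:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-586043
    if [ $? -eq 7 ]; then
      # Exit status 7 is what this run reports for a stopped profile ("may be ok").
      out/minikube-linux-amd64 addons enable dashboard -p newest-cni-586043 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi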

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-586043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0203 11:52:01.720253  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
E0203 11:52:09.319593  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/flannel-927018/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-586043 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (36.758538177s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586043 -n newest-cni-586043
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-586043 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-586043 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586043 -n newest-cni-586043
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586043 -n newest-cni-586043: exit status 2 (247.544043ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586043 -n newest-cni-586043
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586043 -n newest-cni-586043: exit status 2 (239.305087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-586043 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586043 -n newest-cni-586043
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586043 -n newest-cni-586043
E0203 11:52:29.423071  116606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/bridge-927018/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestNetworkPlugins/group/kubenet 3.19
268 TestNetworkPlugins/group/cilium 4.05
281 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-106432 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-927018 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-927018" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.82:8443
  name: cert-expiration-149645
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.224:8443
  name: running-upgrade-191474
contexts:
- context:
    cluster: cert-expiration-149645
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-149645
  name: cert-expiration-149645
- context:
    cluster: running-upgrade-191474
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-191474
  name: running-upgrade-191474
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-149645
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/client.crt
    client-key: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/client.key
- name: running-upgrade-191474
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/running-upgrade-191474/client.crt
    client-key: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/running-upgrade-191474/client.key
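A hedged aside (not part of the captured output): the kubectl errors throughout this debug dump occur because no "kubenet-927018" context exists; the kubeconfig above only contains the cert-expiration-149645 and running-upgrade-191474 entries and has no current-context set. The contexts that do exist can be inspected with standard kubectl commands:

    # List the contexts present in the kubeconfig shown above.
    kubectl config get-contexts
    # current-context is empty here, so this reports that it is not set.
    kubectl config current-context
    # Commands must therefore name a context explicitly, for example:
    kubectl --context cert-expiration-149645 get nodes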

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-927018

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-927018"

                                                
                                                
----------------------- debugLogs end: kubenet-927018 [took: 3.014662383s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-927018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-927018
--- SKIP: TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-927018 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-927018" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.82:8443
  name: cert-expiration-149645
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20354-109432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.224:8443
  name: running-upgrade-191474
contexts:
- context:
    cluster: cert-expiration-149645
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-149645
  name: cert-expiration-149645
- context:
    cluster: running-upgrade-191474
    extensions:
    - extension:
        last-update: Mon, 03 Feb 2025 11:34:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-191474
  name: running-upgrade-191474
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-149645
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/client.crt
    client-key: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/cert-expiration-149645/client.key
- name: running-upgrade-191474
  user:
    client-certificate: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/running-upgrade-191474/client.crt
    client-key: /home/jenkins/minikube-integration/20354-109432/.minikube/profiles/running-upgrade-191474/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-927018

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-927018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-927018"

                                                
                                                
----------------------- debugLogs end: cilium-927018 [took: 3.866036031s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-927018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-927018
--- SKIP: TestNetworkPlugins/group/cilium (4.05s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-518498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-518498
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    