Test Report: KVM_Linux_crio 20288

ced131f14e611cbeeb9356239cf0040c87f16008:2025-01-22:38026

Test fail (11/318)

TestAddons/parallel/Ingress (154.6s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-772234 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-772234 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-772234 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9b0101f5-4f0f-44ea-af44-62a0d91ae084] Pending
helpers_test.go:344: "nginx" [9b0101f5-4f0f-44ea-af44-62a0d91ae084] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9b0101f5-4f0f-44ea-af44-62a0d91ae084] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007869535s
I0122 20:06:39.678950  254754 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-772234 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.470054188s)

** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-772234 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.58
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-772234 -n addons-772234
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 logs -n 25: (1.54575607s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-489470                                                                     | download-only-489470 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:02 UTC |
	| delete  | -p download-only-562691                                                                     | download-only-562691 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:02 UTC |
	| delete  | -p download-only-489470                                                                     | download-only-489470 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-292789 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC |                     |
	|         | binary-mirror-292789                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43139                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-292789                                                                     | binary-mirror-292789 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC |                     |
	|         | addons-772234                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC |                     |
	|         | addons-772234                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-772234 --wait=true                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-772234 addons disable                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:05 UTC | 22 Jan 25 20:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-772234 addons disable                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:05 UTC | 22 Jan 25 20:06 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | -p addons-772234                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons disable                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-772234 ip                                                                            | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	| addons  | addons-772234 addons disable                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-772234 ssh cat                                                                       | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | /opt/local-path-provisioner/pvc-b5af557f-06dc-4193-b387-b33d4ee260a6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-772234 addons                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons disable                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons disable                                                                | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC | 22 Jan 25 20:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-772234 ssh curl -s                                                                   | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-772234 addons                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:07 UTC | 22 Jan 25 20:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-772234 addons                                                                        | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:07 UTC | 22 Jan 25 20:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-772234 ip                                                                            | addons-772234        | jenkins | v1.35.0 | 22 Jan 25 20:08 UTC | 22 Jan 25 20:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 20:02:24
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 20:02:24.639871  255382 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:02:24.639993  255382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:02:24.640001  255382 out.go:358] Setting ErrFile to fd 2...
	I0122 20:02:24.640005  255382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:02:24.640194  255382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:02:24.640941  255382 out.go:352] Setting JSON to false
	I0122 20:02:24.641960  255382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9891,"bootTime":1737566254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:02:24.642114  255382 start.go:139] virtualization: kvm guest
	I0122 20:02:24.644603  255382 out.go:177] * [addons-772234] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:02:24.646432  255382 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:02:24.646443  255382 notify.go:220] Checking for updates...
	I0122 20:02:24.649712  255382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:02:24.651341  255382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 20:02:24.652929  255382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:02:24.654581  255382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:02:24.656345  255382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:02:24.658499  255382 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:02:24.697080  255382 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 20:02:24.698751  255382 start.go:297] selected driver: kvm2
	I0122 20:02:24.698772  255382 start.go:901] validating driver "kvm2" against <nil>
	I0122 20:02:24.698792  255382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:02:24.699932  255382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:02:24.700084  255382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 20:02:24.718168  255382 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 20:02:24.718289  255382 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 20:02:24.718583  255382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 20:02:24.718624  255382 cni.go:84] Creating CNI manager for ""
	I0122 20:02:24.718681  255382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 20:02:24.718691  255382 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 20:02:24.718748  255382 start.go:340] cluster config:
	{Name:addons-772234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-772234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:02:24.718922  255382 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:02:24.721274  255382 out.go:177] * Starting "addons-772234" primary control-plane node in "addons-772234" cluster
	I0122 20:02:24.722847  255382 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 20:02:24.722926  255382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 20:02:24.722940  255382 cache.go:56] Caching tarball of preloaded images
	I0122 20:02:24.723085  255382 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 20:02:24.723098  255382 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 20:02:24.723440  255382 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/config.json ...
	I0122 20:02:24.723465  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/config.json: {Name:mk6220581f9fd04e40b84492b46fea34a8e0d53a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:24.723662  255382 start.go:360] acquireMachinesLock for addons-772234: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 20:02:24.723723  255382 start.go:364] duration metric: took 42.737µs to acquireMachinesLock for "addons-772234"
	I0122 20:02:24.723745  255382 start.go:93] Provisioning new machine with config: &{Name:addons-772234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-772234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 20:02:24.723810  255382 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 20:02:24.726635  255382 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0122 20:02:24.726875  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:02:24.726961  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:02:24.743715  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0122 20:02:24.744212  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:02:24.744878  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:02:24.744902  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:02:24.745306  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:02:24.745571  255382 main.go:141] libmachine: (addons-772234) Calling .GetMachineName
	I0122 20:02:24.745765  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:24.745998  255382 start.go:159] libmachine.API.Create for "addons-772234" (driver="kvm2")
	I0122 20:02:24.746064  255382 client.go:168] LocalClient.Create starting
	I0122 20:02:24.746114  255382 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem
	I0122 20:02:24.884273  255382 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem
	I0122 20:02:24.933754  255382 main.go:141] libmachine: Running pre-create checks...
	I0122 20:02:24.933787  255382 main.go:141] libmachine: (addons-772234) Calling .PreCreateCheck
	I0122 20:02:24.934436  255382 main.go:141] libmachine: (addons-772234) Calling .GetConfigRaw
	I0122 20:02:24.934982  255382 main.go:141] libmachine: Creating machine...
	I0122 20:02:24.934998  255382 main.go:141] libmachine: (addons-772234) Calling .Create
	I0122 20:02:24.935218  255382 main.go:141] libmachine: (addons-772234) creating KVM machine...
	I0122 20:02:24.935244  255382 main.go:141] libmachine: (addons-772234) creating network...
	I0122 20:02:24.936646  255382 main.go:141] libmachine: (addons-772234) DBG | found existing default KVM network
	I0122 20:02:24.937504  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:24.937268  255404 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I0122 20:02:24.937538  255382 main.go:141] libmachine: (addons-772234) DBG | created network xml: 
	I0122 20:02:24.937551  255382 main.go:141] libmachine: (addons-772234) DBG | <network>
	I0122 20:02:24.937558  255382 main.go:141] libmachine: (addons-772234) DBG |   <name>mk-addons-772234</name>
	I0122 20:02:24.937566  255382 main.go:141] libmachine: (addons-772234) DBG |   <dns enable='no'/>
	I0122 20:02:24.937571  255382 main.go:141] libmachine: (addons-772234) DBG |   
	I0122 20:02:24.937580  255382 main.go:141] libmachine: (addons-772234) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0122 20:02:24.937588  255382 main.go:141] libmachine: (addons-772234) DBG |     <dhcp>
	I0122 20:02:24.937601  255382 main.go:141] libmachine: (addons-772234) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0122 20:02:24.937612  255382 main.go:141] libmachine: (addons-772234) DBG |     </dhcp>
	I0122 20:02:24.937642  255382 main.go:141] libmachine: (addons-772234) DBG |   </ip>
	I0122 20:02:24.937672  255382 main.go:141] libmachine: (addons-772234) DBG |   
	I0122 20:02:24.937678  255382 main.go:141] libmachine: (addons-772234) DBG | </network>
	I0122 20:02:24.937688  255382 main.go:141] libmachine: (addons-772234) DBG | 
	I0122 20:02:24.944155  255382 main.go:141] libmachine: (addons-772234) DBG | trying to create private KVM network mk-addons-772234 192.168.39.0/24...
	I0122 20:02:25.031640  255382 main.go:141] libmachine: (addons-772234) DBG | private KVM network mk-addons-772234 192.168.39.0/24 created
	I0122 20:02:25.031677  255382 main.go:141] libmachine: (addons-772234) setting up store path in /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234 ...
	I0122 20:02:25.031686  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:25.031640  255404 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:02:25.031707  255382 main.go:141] libmachine: (addons-772234) building disk image from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 20:02:25.031835  255382 main.go:141] libmachine: (addons-772234) Downloading /home/jenkins/minikube-integration/20288-247142/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 20:02:25.347951  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:25.347792  255404 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa...
	I0122 20:02:25.559126  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:25.558909  255404 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/addons-772234.rawdisk...
	I0122 20:02:25.559170  255382 main.go:141] libmachine: (addons-772234) DBG | Writing magic tar header
	I0122 20:02:25.559188  255382 main.go:141] libmachine: (addons-772234) DBG | Writing SSH key tar header
	I0122 20:02:25.559201  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:25.559087  255404 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234 ...
	I0122 20:02:25.559268  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234
	I0122 20:02:25.559291  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines
	I0122 20:02:25.559300  255382 main.go:141] libmachine: (addons-772234) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234 (perms=drwx------)
	I0122 20:02:25.559313  255382 main.go:141] libmachine: (addons-772234) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines (perms=drwxr-xr-x)
	I0122 20:02:25.559319  255382 main.go:141] libmachine: (addons-772234) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube (perms=drwxr-xr-x)
	I0122 20:02:25.559326  255382 main.go:141] libmachine: (addons-772234) setting executable bit set on /home/jenkins/minikube-integration/20288-247142 (perms=drwxrwxr-x)
	I0122 20:02:25.559331  255382 main.go:141] libmachine: (addons-772234) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 20:02:25.559340  255382 main.go:141] libmachine: (addons-772234) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 20:02:25.559345  255382 main.go:141] libmachine: (addons-772234) creating domain...
	I0122 20:02:25.559355  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:02:25.559361  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142
	I0122 20:02:25.559398  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 20:02:25.559423  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home/jenkins
	I0122 20:02:25.559444  255382 main.go:141] libmachine: (addons-772234) DBG | checking permissions on dir: /home
	I0122 20:02:25.559456  255382 main.go:141] libmachine: (addons-772234) DBG | skipping /home - not owner
	I0122 20:02:25.560727  255382 main.go:141] libmachine: (addons-772234) define libvirt domain using xml: 
	I0122 20:02:25.560753  255382 main.go:141] libmachine: (addons-772234) <domain type='kvm'>
	I0122 20:02:25.560781  255382 main.go:141] libmachine: (addons-772234)   <name>addons-772234</name>
	I0122 20:02:25.560790  255382 main.go:141] libmachine: (addons-772234)   <memory unit='MiB'>4000</memory>
	I0122 20:02:25.560799  255382 main.go:141] libmachine: (addons-772234)   <vcpu>2</vcpu>
	I0122 20:02:25.560809  255382 main.go:141] libmachine: (addons-772234)   <features>
	I0122 20:02:25.560816  255382 main.go:141] libmachine: (addons-772234)     <acpi/>
	I0122 20:02:25.560824  255382 main.go:141] libmachine: (addons-772234)     <apic/>
	I0122 20:02:25.560833  255382 main.go:141] libmachine: (addons-772234)     <pae/>
	I0122 20:02:25.560840  255382 main.go:141] libmachine: (addons-772234)     
	I0122 20:02:25.560847  255382 main.go:141] libmachine: (addons-772234)   </features>
	I0122 20:02:25.560855  255382 main.go:141] libmachine: (addons-772234)   <cpu mode='host-passthrough'>
	I0122 20:02:25.560867  255382 main.go:141] libmachine: (addons-772234)   
	I0122 20:02:25.560874  255382 main.go:141] libmachine: (addons-772234)   </cpu>
	I0122 20:02:25.560881  255382 main.go:141] libmachine: (addons-772234)   <os>
	I0122 20:02:25.560889  255382 main.go:141] libmachine: (addons-772234)     <type>hvm</type>
	I0122 20:02:25.560916  255382 main.go:141] libmachine: (addons-772234)     <boot dev='cdrom'/>
	I0122 20:02:25.560931  255382 main.go:141] libmachine: (addons-772234)     <boot dev='hd'/>
	I0122 20:02:25.560937  255382 main.go:141] libmachine: (addons-772234)     <bootmenu enable='no'/>
	I0122 20:02:25.560941  255382 main.go:141] libmachine: (addons-772234)   </os>
	I0122 20:02:25.560949  255382 main.go:141] libmachine: (addons-772234)   <devices>
	I0122 20:02:25.560954  255382 main.go:141] libmachine: (addons-772234)     <disk type='file' device='cdrom'>
	I0122 20:02:25.560965  255382 main.go:141] libmachine: (addons-772234)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/boot2docker.iso'/>
	I0122 20:02:25.560970  255382 main.go:141] libmachine: (addons-772234)       <target dev='hdc' bus='scsi'/>
	I0122 20:02:25.560981  255382 main.go:141] libmachine: (addons-772234)       <readonly/>
	I0122 20:02:25.560985  255382 main.go:141] libmachine: (addons-772234)     </disk>
	I0122 20:02:25.560997  255382 main.go:141] libmachine: (addons-772234)     <disk type='file' device='disk'>
	I0122 20:02:25.561007  255382 main.go:141] libmachine: (addons-772234)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 20:02:25.561015  255382 main.go:141] libmachine: (addons-772234)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/addons-772234.rawdisk'/>
	I0122 20:02:25.561021  255382 main.go:141] libmachine: (addons-772234)       <target dev='hda' bus='virtio'/>
	I0122 20:02:25.561055  255382 main.go:141] libmachine: (addons-772234)     </disk>
	I0122 20:02:25.561081  255382 main.go:141] libmachine: (addons-772234)     <interface type='network'>
	I0122 20:02:25.561092  255382 main.go:141] libmachine: (addons-772234)       <source network='mk-addons-772234'/>
	I0122 20:02:25.561117  255382 main.go:141] libmachine: (addons-772234)       <model type='virtio'/>
	I0122 20:02:25.561136  255382 main.go:141] libmachine: (addons-772234)     </interface>
	I0122 20:02:25.561147  255382 main.go:141] libmachine: (addons-772234)     <interface type='network'>
	I0122 20:02:25.561157  255382 main.go:141] libmachine: (addons-772234)       <source network='default'/>
	I0122 20:02:25.561171  255382 main.go:141] libmachine: (addons-772234)       <model type='virtio'/>
	I0122 20:02:25.561182  255382 main.go:141] libmachine: (addons-772234)     </interface>
	I0122 20:02:25.561190  255382 main.go:141] libmachine: (addons-772234)     <serial type='pty'>
	I0122 20:02:25.561200  255382 main.go:141] libmachine: (addons-772234)       <target port='0'/>
	I0122 20:02:25.561210  255382 main.go:141] libmachine: (addons-772234)     </serial>
	I0122 20:02:25.561226  255382 main.go:141] libmachine: (addons-772234)     <console type='pty'>
	I0122 20:02:25.561236  255382 main.go:141] libmachine: (addons-772234)       <target type='serial' port='0'/>
	I0122 20:02:25.561250  255382 main.go:141] libmachine: (addons-772234)     </console>
	I0122 20:02:25.561267  255382 main.go:141] libmachine: (addons-772234)     <rng model='virtio'>
	I0122 20:02:25.561280  255382 main.go:141] libmachine: (addons-772234)       <backend model='random'>/dev/random</backend>
	I0122 20:02:25.561287  255382 main.go:141] libmachine: (addons-772234)     </rng>
	I0122 20:02:25.561292  255382 main.go:141] libmachine: (addons-772234)     
	I0122 20:02:25.561296  255382 main.go:141] libmachine: (addons-772234)     
	I0122 20:02:25.561301  255382 main.go:141] libmachine: (addons-772234)   </devices>
	I0122 20:02:25.561307  255382 main.go:141] libmachine: (addons-772234) </domain>
	I0122 20:02:25.561314  255382 main.go:141] libmachine: (addons-772234) 
	I0122 20:02:25.566087  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:67:c7:34 in network default
	I0122 20:02:25.566753  255382 main.go:141] libmachine: (addons-772234) starting domain...
	I0122 20:02:25.566776  255382 main.go:141] libmachine: (addons-772234) ensuring networks are active...
	I0122 20:02:25.566784  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:25.567618  255382 main.go:141] libmachine: (addons-772234) Ensuring network default is active
	I0122 20:02:25.567994  255382 main.go:141] libmachine: (addons-772234) Ensuring network mk-addons-772234 is active
	I0122 20:02:25.568548  255382 main.go:141] libmachine: (addons-772234) getting domain XML...
	I0122 20:02:25.569497  255382 main.go:141] libmachine: (addons-772234) creating domain...
	I0122 20:02:26.979089  255382 main.go:141] libmachine: (addons-772234) waiting for IP...
	I0122 20:02:26.980004  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:26.981480  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:26.981506  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:26.981442  255404 retry.go:31] will retry after 270.793796ms: waiting for domain to come up
	I0122 20:02:27.254165  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:27.254719  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:27.254771  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:27.254696  255404 retry.go:31] will retry after 322.899688ms: waiting for domain to come up
	I0122 20:02:27.579544  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:27.580086  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:27.580138  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:27.580059  255404 retry.go:31] will retry after 346.34551ms: waiting for domain to come up
	I0122 20:02:27.928535  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:27.929058  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:27.929112  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:27.928998  255404 retry.go:31] will retry after 488.055023ms: waiting for domain to come up
	I0122 20:02:28.418980  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:28.419391  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:28.419455  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:28.419344  255404 retry.go:31] will retry after 566.464497ms: waiting for domain to come up
	I0122 20:02:28.987325  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:28.987993  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:28.988022  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:28.987936  255404 retry.go:31] will retry after 828.74878ms: waiting for domain to come up
	I0122 20:02:29.819065  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:29.819542  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:29.819576  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:29.819515  255404 retry.go:31] will retry after 765.174218ms: waiting for domain to come up
	I0122 20:02:30.586705  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:30.587244  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:30.587271  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:30.587216  255404 retry.go:31] will retry after 1.072348684s: waiting for domain to come up
	I0122 20:02:31.661595  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:31.662095  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:31.662138  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:31.662046  255404 retry.go:31] will retry after 1.453068732s: waiting for domain to come up
	I0122 20:02:33.117818  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:33.118232  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:33.118273  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:33.118225  255404 retry.go:31] will retry after 1.467981043s: waiting for domain to come up
	I0122 20:02:34.588183  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:34.588729  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:34.588751  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:34.588705  255404 retry.go:31] will retry after 1.929486804s: waiting for domain to come up
	I0122 20:02:36.520238  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:36.521079  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:36.521118  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:36.521002  255404 retry.go:31] will retry after 2.638034863s: waiting for domain to come up
	I0122 20:02:39.160660  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:39.161173  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:39.161206  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:39.161133  255404 retry.go:31] will retry after 3.483791739s: waiting for domain to come up
	I0122 20:02:42.648704  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:42.649168  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find current IP address of domain addons-772234 in network mk-addons-772234
	I0122 20:02:42.649245  255382 main.go:141] libmachine: (addons-772234) DBG | I0122 20:02:42.649174  255404 retry.go:31] will retry after 3.790723431s: waiting for domain to come up
	I0122 20:02:46.441863  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.442346  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has current primary IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.442370  255382 main.go:141] libmachine: (addons-772234) found domain IP: 192.168.39.58
	I0122 20:02:46.442382  255382 main.go:141] libmachine: (addons-772234) reserving static IP address...
	I0122 20:02:46.442768  255382 main.go:141] libmachine: (addons-772234) DBG | unable to find host DHCP lease matching {name: "addons-772234", mac: "52:54:00:37:16:89", ip: "192.168.39.58"} in network mk-addons-772234
	I0122 20:02:46.586392  255382 main.go:141] libmachine: (addons-772234) DBG | Getting to WaitForSSH function...
	I0122 20:02:46.586435  255382 main.go:141] libmachine: (addons-772234) reserved static IP address 192.168.39.58 for domain addons-772234
	I0122 20:02:46.586450  255382 main.go:141] libmachine: (addons-772234) waiting for SSH...
	I0122 20:02:46.589689  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.590370  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:37:16:89}
	I0122 20:02:46.590412  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.590620  255382 main.go:141] libmachine: (addons-772234) DBG | Using SSH client type: external
	I0122 20:02:46.590653  255382 main.go:141] libmachine: (addons-772234) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa (-rw-------)
	I0122 20:02:46.590704  255382 main.go:141] libmachine: (addons-772234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 20:02:46.590723  255382 main.go:141] libmachine: (addons-772234) DBG | About to run SSH command:
	I0122 20:02:46.590736  255382 main.go:141] libmachine: (addons-772234) DBG | exit 0
	I0122 20:02:46.726848  255382 main.go:141] libmachine: (addons-772234) DBG | SSH cmd err, output: <nil>: 
	I0122 20:02:46.727230  255382 main.go:141] libmachine: (addons-772234) KVM machine creation complete
	I0122 20:02:46.727665  255382 main.go:141] libmachine: (addons-772234) Calling .GetConfigRaw
	I0122 20:02:46.728306  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:46.728533  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:46.728676  255382 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 20:02:46.728689  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:02:46.730218  255382 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 20:02:46.730242  255382 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 20:02:46.730250  255382 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 20:02:46.730259  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:46.732939  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.733306  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:46.733328  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.733593  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:46.733841  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:46.734036  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:46.734217  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:46.734416  255382 main.go:141] libmachine: Using SSH client type: native
	I0122 20:02:46.734634  255382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0122 20:02:46.734647  255382 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 20:02:46.846069  255382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 20:02:46.846111  255382 main.go:141] libmachine: Detecting the provisioner...
	I0122 20:02:46.846123  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:46.849413  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.849789  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:46.849824  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.850018  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:46.850359  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:46.850580  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:46.850782  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:46.851073  255382 main.go:141] libmachine: Using SSH client type: native
	I0122 20:02:46.851267  255382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0122 20:02:46.851278  255382 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 20:02:46.963782  255382 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 20:02:46.963889  255382 main.go:141] libmachine: found compatible host: buildroot
	I0122 20:02:46.963907  255382 main.go:141] libmachine: Provisioning with buildroot...
	I0122 20:02:46.963921  255382 main.go:141] libmachine: (addons-772234) Calling .GetMachineName
	I0122 20:02:46.964229  255382 buildroot.go:166] provisioning hostname "addons-772234"
	I0122 20:02:46.964266  255382 main.go:141] libmachine: (addons-772234) Calling .GetMachineName
	I0122 20:02:46.964462  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:46.967394  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.967855  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:46.967888  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:46.968140  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:46.968389  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:46.968567  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:46.968745  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:46.968901  255382 main.go:141] libmachine: Using SSH client type: native
	I0122 20:02:46.969110  255382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0122 20:02:46.969123  255382 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-772234 && echo "addons-772234" | sudo tee /etc/hostname
	I0122 20:02:47.099981  255382 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-772234
	
	I0122 20:02:47.100015  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:47.103470  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.104063  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.104097  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.104355  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:47.104584  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.104803  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.104978  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:47.105382  255382 main.go:141] libmachine: Using SSH client type: native
	I0122 20:02:47.105670  255382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0122 20:02:47.105691  255382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-772234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-772234/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-772234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 20:02:47.229967  255382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 20:02:47.230002  255382 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 20:02:47.230045  255382 buildroot.go:174] setting up certificates
	I0122 20:02:47.230074  255382 provision.go:84] configureAuth start
	I0122 20:02:47.230090  255382 main.go:141] libmachine: (addons-772234) Calling .GetMachineName
	I0122 20:02:47.230484  255382 main.go:141] libmachine: (addons-772234) Calling .GetIP
	I0122 20:02:47.233774  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.234168  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.234243  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.234524  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:47.237500  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.238089  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.238127  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.238393  255382 provision.go:143] copyHostCerts
	I0122 20:02:47.238505  255382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 20:02:47.238655  255382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 20:02:47.238737  255382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 20:02:47.238815  255382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.addons-772234 san=[127.0.0.1 192.168.39.58 addons-772234 localhost minikube]
	I0122 20:02:47.366052  255382 provision.go:177] copyRemoteCerts
	I0122 20:02:47.366162  255382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 20:02:47.366214  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:47.369628  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.370050  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.370084  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.370599  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:47.370849  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.371085  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:47.371275  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:02:47.458292  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 20:02:47.487530  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 20:02:47.516477  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0122 20:02:47.545542  255382 provision.go:87] duration metric: took 315.450331ms to configureAuth
	I0122 20:02:47.545582  255382 buildroot.go:189] setting minikube options for container-runtime
	I0122 20:02:47.545812  255382 config.go:182] Loaded profile config "addons-772234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:02:47.545939  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:47.548962  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.549319  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.549350  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.549624  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:47.549872  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.550061  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.550249  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:47.550423  255382 main.go:141] libmachine: Using SSH client type: native
	I0122 20:02:47.550628  255382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0122 20:02:47.550645  255382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 20:02:47.802543  255382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 20:02:47.802595  255382 main.go:141] libmachine: Checking connection to Docker...
	I0122 20:02:47.802610  255382 main.go:141] libmachine: (addons-772234) Calling .GetURL
	I0122 20:02:47.803877  255382 main.go:141] libmachine: (addons-772234) DBG | using libvirt version 6000000
	I0122 20:02:47.806041  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.806382  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.806411  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.806603  255382 main.go:141] libmachine: Docker is up and running!
	I0122 20:02:47.806617  255382 main.go:141] libmachine: Reticulating splines...
	I0122 20:02:47.806626  255382 client.go:171] duration metric: took 23.06055055s to LocalClient.Create
	I0122 20:02:47.806654  255382 start.go:167] duration metric: took 23.060657908s to libmachine.API.Create "addons-772234"
	I0122 20:02:47.806692  255382 start.go:293] postStartSetup for "addons-772234" (driver="kvm2")
	I0122 20:02:47.806709  255382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 20:02:47.806734  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:47.807053  255382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 20:02:47.807085  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:47.809295  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.809547  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.809571  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.809751  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:47.809980  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.810153  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:47.810345  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:02:47.902706  255382 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 20:02:47.907997  255382 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 20:02:47.908046  255382 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 20:02:47.908146  255382 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 20:02:47.908184  255382 start.go:296] duration metric: took 101.481095ms for postStartSetup
	I0122 20:02:47.908256  255382 main.go:141] libmachine: (addons-772234) Calling .GetConfigRaw
	I0122 20:02:47.908942  255382 main.go:141] libmachine: (addons-772234) Calling .GetIP
	I0122 20:02:47.911861  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.912243  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.912271  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.912591  255382 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/config.json ...
	I0122 20:02:47.912853  255382 start.go:128] duration metric: took 23.18902692s to createHost
	I0122 20:02:47.912885  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:47.915658  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.916044  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:47.916095  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:47.916248  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:47.916502  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.916688  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:47.916867  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:47.917122  255382 main.go:141] libmachine: Using SSH client type: native
	I0122 20:02:47.917320  255382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0122 20:02:47.917330  255382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 20:02:48.027640  255382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737576167.994139499
	
	I0122 20:02:48.027672  255382 fix.go:216] guest clock: 1737576167.994139499
	I0122 20:02:48.027683  255382 fix.go:229] Guest: 2025-01-22 20:02:47.994139499 +0000 UTC Remote: 2025-01-22 20:02:47.912869488 +0000 UTC m=+23.319325164 (delta=81.270011ms)
	I0122 20:02:48.027742  255382 fix.go:200] guest clock delta is within tolerance: 81.270011ms
	I0122 20:02:48.027748  255382 start.go:83] releasing machines lock for "addons-772234", held for 23.304014235s
	I0122 20:02:48.027797  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:48.028138  255382 main.go:141] libmachine: (addons-772234) Calling .GetIP
	I0122 20:02:48.031253  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:48.031676  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:48.031714  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:48.031941  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:48.032762  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:48.033034  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:02:48.033167  255382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 20:02:48.033239  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:48.033282  255382 ssh_runner.go:195] Run: cat /version.json
	I0122 20:02:48.033313  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:02:48.036259  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:48.036524  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:48.036579  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:48.036618  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:48.036821  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:48.037088  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:48.037123  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:48.037149  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:48.037325  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:48.037330  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:02:48.037562  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:02:48.037587  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:02:48.037714  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:02:48.037871  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:02:48.120197  255382 ssh_runner.go:195] Run: systemctl --version
	I0122 20:02:48.142925  255382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 20:02:48.317221  255382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 20:02:48.324225  255382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 20:02:48.324322  255382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 20:02:48.345352  255382 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 20:02:48.345392  255382 start.go:495] detecting cgroup driver to use...
	I0122 20:02:48.345487  255382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 20:02:48.364631  255382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 20:02:48.382176  255382 docker.go:217] disabling cri-docker service (if available) ...
	I0122 20:02:48.382268  255382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 20:02:48.399160  255382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 20:02:48.416226  255382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 20:02:48.547426  255382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 20:02:48.712121  255382 docker.go:233] disabling docker service ...
	I0122 20:02:48.712203  255382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 20:02:48.729294  255382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 20:02:48.744906  255382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 20:02:48.910439  255382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 20:02:49.043657  255382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 20:02:49.060476  255382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 20:02:49.083013  255382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 20:02:49.083087  255382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.096037  255382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 20:02:49.096122  255382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.109227  255382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.122436  255382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.135477  255382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 20:02:49.148836  255382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.161994  255382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.182817  255382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 20:02:49.195689  255382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 20:02:49.207525  255382 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 20:02:49.207618  255382 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 20:02:49.224291  255382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 20:02:49.236085  255382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 20:02:49.384797  255382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 20:02:49.496549  255382 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 20:02:49.496663  255382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 20:02:49.502538  255382 start.go:563] Will wait 60s for crictl version
	I0122 20:02:49.502639  255382 ssh_runner.go:195] Run: which crictl
	I0122 20:02:49.507312  255382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 20:02:49.556792  255382 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 20:02:49.556983  255382 ssh_runner.go:195] Run: crio --version
	I0122 20:02:49.589261  255382 ssh_runner.go:195] Run: crio --version
	I0122 20:02:49.624871  255382 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 20:02:49.626228  255382 main.go:141] libmachine: (addons-772234) Calling .GetIP
	I0122 20:02:49.628971  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:49.629323  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:02:49.629358  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:02:49.629617  255382 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0122 20:02:49.634434  255382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 20:02:49.650543  255382 kubeadm.go:883] updating cluster {Name:addons-772234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-772234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 20:02:49.650706  255382 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 20:02:49.650770  255382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 20:02:49.694047  255382 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 20:02:49.694129  255382 ssh_runner.go:195] Run: which lz4
	I0122 20:02:49.698840  255382 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 20:02:49.703679  255382 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 20:02:49.703715  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 20:02:51.343842  255382 crio.go:462] duration metric: took 1.645037976s to copy over tarball
	I0122 20:02:51.343941  255382 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 20:02:53.950231  255382 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.606250772s)
	I0122 20:02:53.950268  255382 crio.go:469] duration metric: took 2.606391542s to extract the tarball
	I0122 20:02:53.950276  255382 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 20:02:53.990402  255382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 20:02:54.050105  255382 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 20:02:54.050139  255382 cache_images.go:84] Images are preloaded, skipping loading
	I0122 20:02:54.050152  255382 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.32.1 crio true true} ...
	I0122 20:02:54.050293  255382 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-772234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-772234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 20:02:54.050376  255382 ssh_runner.go:195] Run: crio config
	I0122 20:02:54.113017  255382 cni.go:84] Creating CNI manager for ""
	I0122 20:02:54.113049  255382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 20:02:54.113063  255382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 20:02:54.113090  255382 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-772234 NodeName:addons-772234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 20:02:54.113264  255382 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-772234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 20:02:54.113340  255382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 20:02:54.125794  255382 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 20:02:54.125889  255382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 20:02:54.137706  255382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0122 20:02:54.159877  255382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 20:02:54.181440  255382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0122 20:02:54.203639  255382 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0122 20:02:54.208459  255382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 20:02:54.224067  255382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 20:02:54.377355  255382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 20:02:54.399332  255382 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234 for IP: 192.168.39.58
	I0122 20:02:54.399364  255382 certs.go:194] generating shared ca certs ...
	I0122 20:02:54.399383  255382 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.400364  255382 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 20:02:54.542955  255382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt ...
	I0122 20:02:54.542994  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt: {Name:mk04d29f6518a83d0f0ea89b5b3dbdcfe1570252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.544013  255382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key ...
	I0122 20:02:54.544039  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key: {Name:mkdbc8dc16a717887ecb58d067001b6a04b4b93b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.544143  255382 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 20:02:54.663638  255382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt ...
	I0122 20:02:54.663675  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt: {Name:mk82c200e734845dfb02645416e6f2139f5c4157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.740551  255382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key ...
	I0122 20:02:54.740599  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key: {Name:mk6c5478c08014505c273ce8e4967c489162317d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.741375  255382 certs.go:256] generating profile certs ...
	I0122 20:02:54.741492  255382 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.key
	I0122 20:02:54.741556  255382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt with IP's: []
	I0122 20:02:54.976365  255382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt ...
	I0122 20:02:54.976420  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: {Name:mkad5ef5a1e3d7b47ff8754bea6b77a1ac549e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.977331  255382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.key ...
	I0122 20:02:54.977368  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.key: {Name:mkbb37bd0fa6c39ea768d0a1a677ce7338194f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:54.978141  255382 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.key.68e8b77a
	I0122 20:02:54.978176  255382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.crt.68e8b77a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I0122 20:02:55.061414  255382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.crt.68e8b77a ...
	I0122 20:02:55.061454  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.crt.68e8b77a: {Name:mkc29bc915f5b564bc334f522ac3b83153b4b866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:55.062318  255382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.key.68e8b77a ...
	I0122 20:02:55.062346  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.key.68e8b77a: {Name:mk3c13bd911b6da8abb41b7b77e6088f0ae55ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:55.062860  255382 certs.go:381] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.crt.68e8b77a -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.crt
	I0122 20:02:55.062982  255382 certs.go:385] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.key.68e8b77a -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.key
	I0122 20:02:55.063064  255382 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.key
	I0122 20:02:55.063094  255382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.crt with IP's: []
	I0122 20:02:55.118764  255382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.crt ...
	I0122 20:02:55.118806  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.crt: {Name:mk6ae08ac935afb616a90da31876f874bc8d9240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:55.119736  255382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.key ...
	I0122 20:02:55.119772  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.key: {Name:mk39382d88e94b1c16f5abbdc39cabe660a45645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:55.120704  255382 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 20:02:55.120759  255382 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 20:02:55.120780  255382 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 20:02:55.120858  255382 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 20:02:55.121584  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 20:02:55.162479  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 20:02:55.200081  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 20:02:55.238596  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 20:02:55.270854  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0122 20:02:55.301963  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 20:02:55.333352  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 20:02:55.364188  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0122 20:02:55.397672  255382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 20:02:55.429356  255382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 20:02:55.453486  255382 ssh_runner.go:195] Run: openssl version
	I0122 20:02:55.461061  255382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 20:02:55.474864  255382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 20:02:55.482136  255382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 20:02:55.482269  255382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 20:02:55.490916  255382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 20:02:55.504831  255382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 20:02:55.510697  255382 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 20:02:55.510770  255382 kubeadm.go:392] StartCluster: {Name:addons-772234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-772234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:02:55.510888  255382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 20:02:55.510962  255382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 20:02:55.561193  255382 cri.go:89] found id: ""
	I0122 20:02:55.561280  255382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 20:02:55.574605  255382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 20:02:55.587684  255382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 20:02:55.603009  255382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 20:02:55.603043  255382 kubeadm.go:157] found existing configuration files:
	
	I0122 20:02:55.603114  255382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 20:02:55.614798  255382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 20:02:55.614877  255382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 20:02:55.626746  255382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 20:02:55.638275  255382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 20:02:55.638357  255382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 20:02:55.650341  255382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 20:02:55.662641  255382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 20:02:55.662739  255382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 20:02:55.675176  255382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 20:02:55.688181  255382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 20:02:55.688258  255382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 20:02:55.701554  255382 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 20:02:55.770156  255382 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 20:02:55.770300  255382 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 20:02:55.910935  255382 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 20:02:55.911067  255382 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 20:02:55.911139  255382 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 20:02:55.922535  255382 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 20:02:56.016330  255382 out.go:235]   - Generating certificates and keys ...
	I0122 20:02:56.016476  255382 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 20:02:56.016575  255382 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 20:02:56.126694  255382 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 20:02:56.279906  255382 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 20:02:56.544797  255382 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 20:02:56.687789  255382 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 20:02:56.761491  255382 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 20:02:56.761705  255382 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-772234 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I0122 20:02:57.257129  255382 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 20:02:57.257266  255382 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-772234 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I0122 20:02:57.447925  255382 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 20:02:57.650022  255382 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 20:02:57.823132  255382 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0122 20:02:57.823300  255382 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 20:02:58.192276  255382 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 20:02:58.302671  255382 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 20:02:58.377960  255382 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 20:02:58.585900  255382 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 20:02:58.896411  255382 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 20:02:58.897122  255382 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 20:02:58.899827  255382 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 20:02:58.935365  255382 out.go:235]   - Booting up control plane ...
	I0122 20:02:58.935569  255382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 20:02:58.935716  255382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 20:02:58.935836  255382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 20:02:58.936012  255382 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 20:02:58.936162  255382 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 20:02:58.936249  255382 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 20:02:59.076980  255382 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 20:02:59.077107  255382 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 20:03:00.077262  255382 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001228535s
	I0122 20:03:00.077423  255382 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 20:03:05.579215  255382 kubeadm.go:310] [api-check] The API server is healthy after 5.505802314s
	I0122 20:03:05.593783  255382 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 20:03:05.622878  255382 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 20:03:05.659476  255382 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 20:03:05.659718  255382 kubeadm.go:310] [mark-control-plane] Marking the node addons-772234 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 20:03:05.674933  255382 kubeadm.go:310] [bootstrap-token] Using token: kh0spc.wirft1azdtr894ve
	I0122 20:03:05.676831  255382 out.go:235]   - Configuring RBAC rules ...
	I0122 20:03:05.677010  255382 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 20:03:05.686709  255382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 20:03:05.700836  255382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 20:03:05.711835  255382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 20:03:05.717092  255382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 20:03:05.726271  255382 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 20:03:05.987427  255382 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 20:03:06.466334  255382 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 20:03:06.986651  255382 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 20:03:06.986678  255382 kubeadm.go:310] 
	I0122 20:03:06.986728  255382 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 20:03:06.986771  255382 kubeadm.go:310] 
	I0122 20:03:06.986884  255382 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 20:03:06.986894  255382 kubeadm.go:310] 
	I0122 20:03:06.986929  255382 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 20:03:06.986997  255382 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 20:03:06.987067  255382 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 20:03:06.987077  255382 kubeadm.go:310] 
	I0122 20:03:06.987149  255382 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 20:03:06.987159  255382 kubeadm.go:310] 
	I0122 20:03:06.987229  255382 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 20:03:06.987239  255382 kubeadm.go:310] 
	I0122 20:03:06.987313  255382 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 20:03:06.987409  255382 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 20:03:06.987510  255382 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 20:03:06.987519  255382 kubeadm.go:310] 
	I0122 20:03:06.987625  255382 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 20:03:06.987727  255382 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 20:03:06.987738  255382 kubeadm.go:310] 
	I0122 20:03:06.987892  255382 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kh0spc.wirft1azdtr894ve \
	I0122 20:03:06.988067  255382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e447fe88d4e43aa7dedab9e7f78d5319a1771f66f483469eded588e9e0904b1d \
	I0122 20:03:06.988108  255382 kubeadm.go:310] 	--control-plane 
	I0122 20:03:06.988118  255382 kubeadm.go:310] 
	I0122 20:03:06.988217  255382 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 20:03:06.988230  255382 kubeadm.go:310] 
	I0122 20:03:06.988297  255382 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kh0spc.wirft1azdtr894ve \
	I0122 20:03:06.988378  255382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e447fe88d4e43aa7dedab9e7f78d5319a1771f66f483469eded588e9e0904b1d 
	I0122 20:03:06.989126  255382 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 20:03:06.989225  255382 cni.go:84] Creating CNI manager for ""
	I0122 20:03:06.989239  255382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 20:03:06.991082  255382 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 20:03:06.992326  255382 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 20:03:07.004727  255382 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
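	(The /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration minikube generates for the "kvm2 + crio" combination. Its exact contents are not reproduced in this log; a typical bridge conflist of roughly this size looks like the following illustrative sketch, with the pod subnet shown as an assumption.)
	
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }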
	I0122 20:03:07.029123  255382 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 20:03:07.029271  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-772234 minikube.k8s.io/updated_at=2025_01_22T20_03_07_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=addons-772234 minikube.k8s.io/primary=true
	I0122 20:03:07.029274  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
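	(The clusterrolebinding created by the command above grants the kube-system:default service account cluster-admin rights so that addon pods can manage cluster resources; it is equivalent to applying roughly this manifest.)
	
	  # Equivalent of: kubectl create clusterrolebinding minikube-rbac \
	  #   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	  apiVersion: rbac.authorization.k8s.io/v1
	  kind: ClusterRoleBinding
	  metadata:
	    name: minikube-rbac
	  roleRef:
	    apiGroup: rbac.authorization.k8s.io
	    kind: ClusterRole
	    name: cluster-admin
	  subjects:
	  - kind: ServiceAccount
	    name: default
	    namespace: kube-system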
	I0122 20:03:07.060648  255382 ops.go:34] apiserver oom_adj: -16
	I0122 20:03:07.169276  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:07.669920  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:08.169411  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:08.670340  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:09.170128  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:09.781363  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:10.169376  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:10.670356  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:11.169393  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:11.669838  255382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 20:03:11.768492  255382 kubeadm.go:1113] duration metric: took 4.739324728s to wait for elevateKubeSystemPrivileges
	I0122 20:03:11.768546  255382 kubeadm.go:394] duration metric: took 16.257786365s to StartCluster
	I0122 20:03:11.768576  255382 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:03:11.768759  255382 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 20:03:11.769161  255382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:03:11.770330  255382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 20:03:11.770364  255382 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0122 20:03:11.770456  255382 addons.go:69] Setting yakd=true in profile "addons-772234"
	I0122 20:03:11.770465  255382 addons.go:69] Setting ingress-dns=true in profile "addons-772234"
	I0122 20:03:11.770485  255382 addons.go:238] Setting addon yakd=true in "addons-772234"
	I0122 20:03:11.770508  255382 addons.go:69] Setting registry=true in profile "addons-772234"
	I0122 20:03:11.770523  255382 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-772234"
	I0122 20:03:11.770529  255382 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-772234"
	I0122 20:03:11.770544  255382 addons.go:69] Setting volumesnapshots=true in profile "addons-772234"
	I0122 20:03:11.770548  255382 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-772234"
	I0122 20:03:11.770553  255382 addons.go:238] Setting addon registry=true in "addons-772234"
	I0122 20:03:11.770557  255382 addons.go:238] Setting addon volumesnapshots=true in "addons-772234"
	I0122 20:03:11.770574  255382 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-772234"
	I0122 20:03:11.770584  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.770588  255382 addons.go:69] Setting storage-provisioner=true in profile "addons-772234"
	I0122 20:03:11.770329  255382 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 20:03:11.770600  255382 addons.go:238] Setting addon storage-provisioner=true in "addons-772234"
	I0122 20:03:11.770598  255382 config.go:182] Loaded profile config "addons-772234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:03:11.770617  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.770628  255382 addons.go:69] Setting metrics-server=true in profile "addons-772234"
	I0122 20:03:11.770654  255382 addons.go:69] Setting gcp-auth=true in profile "addons-772234"
	I0122 20:03:11.770660  255382 addons.go:238] Setting addon metrics-server=true in "addons-772234"
	I0122 20:03:11.770672  255382 mustload.go:65] Loading cluster: addons-772234
	I0122 20:03:11.770664  255382 addons.go:69] Setting default-storageclass=true in profile "addons-772234"
	I0122 20:03:11.770694  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.770695  255382 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-772234"
	I0122 20:03:11.770688  255382 addons.go:69] Setting ingress=true in profile "addons-772234"
	I0122 20:03:11.770715  255382 addons.go:238] Setting addon ingress=true in "addons-772234"
	I0122 20:03:11.770753  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.770826  255382 config.go:182] Loaded profile config "addons-772234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:03:11.771108  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.770588  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.771130  255382 addons.go:69] Setting cloud-spanner=true in profile "addons-772234"
	I0122 20:03:11.770513  255382 addons.go:238] Setting addon ingress-dns=true in "addons-772234"
	I0122 20:03:11.771155  255382 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-772234"
	I0122 20:03:11.771162  255382 addons.go:69] Setting inspektor-gadget=true in profile "addons-772234"
	I0122 20:03:11.771177  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.771194  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.771252  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.771206  255382 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-772234"
	I0122 20:03:11.771284  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771294  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771307  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.770576  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.770531  255382 addons.go:69] Setting volcano=true in profile "addons-772234"
	I0122 20:03:11.771997  255382 addons.go:238] Setting addon volcano=true in "addons-772234"
	I0122 20:03:11.772033  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.771184  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.771159  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771113  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.770522  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.772394  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.772434  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771118  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.772518  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771145  255382 addons.go:238] Setting addon cloud-spanner=true in "addons-772234"
	I0122 20:03:11.771161  255382 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-772234"
	I0122 20:03:11.772553  255382 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-772234"
	I0122 20:03:11.772575  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.772594  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771185  255382 addons.go:238] Setting addon inspektor-gadget=true in "addons-772234"
	I0122 20:03:11.772675  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.772711  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771208  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771240  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.771476  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.772818  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.771626  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.771918  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.772871  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.773037  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.773271  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.773447  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.773476  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.773558  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.773614  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.773943  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.778718  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.778826  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.779583  255382 out.go:177] * Verifying Kubernetes components...
	I0122 20:03:11.781428  255382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 20:03:11.781617  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.793490  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I0122 20:03:11.793490  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0122 20:03:11.798385  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0122 20:03:11.802652  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0122 20:03:11.802652  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I0122 20:03:11.802665  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0122 20:03:11.803436  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.803494  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.803503  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.803592  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.803794  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.803993  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.804062  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.804272  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.804294  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.804510  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.804534  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.804679  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.804770  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.804793  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.804822  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.804840  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.804680  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.804991  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.805083  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.805391  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.805463  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.805765  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.805831  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.805890  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.806705  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.806760  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.815013  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.815099  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.818642  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.819444  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.819504  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.819635  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.819750  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0122 20:03:11.820487  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.820588  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.820609  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.821221  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.821250  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.821321  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.821717  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.822311  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.822365  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.822844  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.822895  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.833989  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0122 20:03:11.834598  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.835291  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.835320  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.835607  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0122 20:03:11.836424  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.837217  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.837273  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.838237  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0122 20:03:11.838662  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.839083  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.839664  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.839692  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.839870  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.839898  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.840321  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.840766  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.841378  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.841435  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.842292  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.842359  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.853999  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
	I0122 20:03:11.854085  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
	I0122 20:03:11.854858  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.854860  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.855534  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.855551  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.855561  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.855572  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.855971  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.855990  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.856160  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.856566  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.856604  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.857187  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0122 20:03:11.857264  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0122 20:03:11.860516  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.860572  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0122 20:03:11.860651  255382 addons.go:238] Setting addon default-storageclass=true in "addons-772234"
	I0122 20:03:11.860696  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.861094  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.861150  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.861582  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.861712  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.862087  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.862584  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.862616  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.862745  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.862761  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.863194  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.863248  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.863303  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.863325  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.863384  255382 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0122 20:03:11.863890  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.863919  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.863894  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.864868  255382 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0122 20:03:11.864890  255382 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0122 20:03:11.864916  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.865119  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.865156  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.866842  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.867943  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.868439  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.868819  255382 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-772234"
	I0122 20:03:11.868891  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:11.869299  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.869359  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.870167  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.870835  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.870885  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.871264  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.871551  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.871767  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.871926  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.874635  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0122 20:03:11.875105  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.875677  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.875697  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.876161  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.876392  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.876684  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38749
	I0122 20:03:11.877288  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.877863  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.877882  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.878591  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.878651  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.878852  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:11.878861  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:11.880719  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:11.880770  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:11.880775  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:11.880782  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:11.880786  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:11.881082  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.881840  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:11.881857  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	W0122 20:03:11.882007  255382 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0122 20:03:11.885058  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0122 20:03:11.886225  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.887210  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.887238  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.887323  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.888282  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.888640  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.889405  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0122 20:03:11.889654  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
	I0122 20:03:11.890233  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.890352  255382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0122 20:03:11.891029  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.891049  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.891144  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.892121  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.892197  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0122 20:03:11.892277  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.892292  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.892706  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0122 20:03:11.892931  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.892993  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.893166  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.893494  255382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0122 20:03:11.893568  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.893759  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.893777  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.894430  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.894478  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.894557  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.894593  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.894824  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.894987  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.895447  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.895666  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.896475  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0122 20:03:11.896537  255382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0122 20:03:11.896628  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.896674  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.897028  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0122 20:03:11.897581  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.898319  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.898495  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0122 20:03:11.898852  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.898855  255382 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0122 20:03:11.899026  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0122 20:03:11.899055  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.898882  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0122 20:03:11.899237  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.899301  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0122 20:03:11.899328  255382 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0122 20:03:11.899874  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.900634  255382 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0122 20:03:11.901448  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.901473  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.901236  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.901697  255382 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0122 20:03:11.902247  255382 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0122 20:03:11.902280  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.902292  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.902414  255382 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 20:03:11.902432  255382 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 20:03:11.902454  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.902551  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.903282  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0122 20:03:11.903333  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.904334  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.904785  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.904836  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.905051  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.905072  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.905265  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.905485  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.905636  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.905842  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.905039  255382 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0122 20:03:11.906108  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0122 20:03:11.906133  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.906541  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45201
	I0122 20:03:11.906719  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0122 20:03:11.908148  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.908824  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.908848  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.909264  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0122 20:03:11.909751  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0122 20:03:11.909878  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.910281  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.911407  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.911541  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.911572  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.911984  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.912268  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.912509  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0122 20:03:11.914074  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0122 20:03:11.914154  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.914350  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.915185  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.915570  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.915591  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.915599  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.915980  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.916217  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.916393  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.916554  255382 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0122 20:03:11.917147  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.917772  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.917795  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.918061  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0122 20:03:11.918315  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.918327  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.918364  255382 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0122 20:03:11.919092  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.919123  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.919096  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.919140  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0122 20:03:11.919214  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.919316  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.919969  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.920006  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.920210  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.920342  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.920405  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.920550  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0122 20:03:11.920567  255382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0122 20:03:11.920590  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.920703  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.921171  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.921191  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.921322  255382 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0122 20:03:11.921331  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.921338  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0122 20:03:11.921343  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.921439  255382 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:03:11.921748  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.922080  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.922342  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.922822  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.922877  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.923034  255382 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 20:03:11.923053  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 20:03:11.923077  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.923525  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.926878  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.927677  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.927733  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.927771  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0122 20:03:11.928053  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.928542  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.928649  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.928701  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.928981  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.929218  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.929239  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.929322  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.929338  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.929382  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.929464  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.929634  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.929732  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.930047  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.930458  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.930450  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.930484  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.930831  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.930980  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:11.931040  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:11.931034  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.931620  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.931842  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.932295  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.932549  255382 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0122 20:03:11.932817  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.934248  255382 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0122 20:03:11.934274  255382 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0122 20:03:11.934306  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.934683  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0122 20:03:11.935531  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.935784  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0122 20:03:11.936668  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.936699  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.936785  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.937891  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.937918  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.937957  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.938353  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.938674  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.938726  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.938957  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.939476  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.939498  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.940683  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.940732  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.941353  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.941356  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.941559  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.941786  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.943335  255382 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0122 20:03:11.943335  255382 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0122 20:03:11.944925  255382 out.go:177]   - Using image docker.io/registry:2.8.3
	I0122 20:03:11.945000  255382 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0122 20:03:11.945020  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0122 20:03:11.945056  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.946663  255382 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0122 20:03:11.946689  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0122 20:03:11.946725  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.950355  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.950581  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.950708  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.950731  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.952543  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.952551  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0122 20:03:11.952573  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.952543  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0122 20:03:11.952544  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.952665  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.952695  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.953239  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.953249  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.953285  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.953300  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.953700  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.953799  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.953812  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.953817  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.953857  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.953872  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.954264  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.954289  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.954268  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.954450  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.954500  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.956401  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.956745  255382 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 20:03:11.956761  255382 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 20:03:11.956779  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.957437  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.958084  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
	I0122 20:03:11.958746  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:11.959537  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:11.959561  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:11.959680  255382 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0122 20:03:11.960082  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:11.960246  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.960300  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:11.960746  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.960794  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.960975  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.961063  255382 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0122 20:03:11.961084  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0122 20:03:11.961110  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.961114  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.961164  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.961332  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.962251  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:11.963994  255382 out.go:177]   - Using image docker.io/busybox:stable
	I0122 20:03:11.964556  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.965046  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.965072  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.965287  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.965521  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.965669  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.965778  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:11.966685  255382 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0122 20:03:11.968008  255382 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0122 20:03:11.968034  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0122 20:03:11.968062  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:11.971274  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.971615  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:11.971633  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:11.971866  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:11.972302  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:11.972561  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:11.972700  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:12.328346  255382 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0122 20:03:12.328378  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0122 20:03:12.335755  255382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 20:03:12.335786  255382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0122 20:03:12.374358  255382 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0122 20:03:12.374400  255382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0122 20:03:12.453028  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0122 20:03:12.531333  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0122 20:03:12.543243  255382 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 20:03:12.543279  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0122 20:03:12.586051  255382 node_ready.go:35] waiting up to 6m0s for node "addons-772234" to be "Ready" ...
	I0122 20:03:12.589836  255382 node_ready.go:49] node "addons-772234" has status "Ready":"True"
	I0122 20:03:12.589873  255382 node_ready.go:38] duration metric: took 3.7754ms for node "addons-772234" to be "Ready" ...
	I0122 20:03:12.589886  255382 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 20:03:12.609304  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 20:03:12.612184  255382 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l82r5" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:12.614464  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0122 20:03:12.627930  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 20:03:12.628612  255382 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0122 20:03:12.628632  255382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0122 20:03:12.631611  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0122 20:03:12.636243  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0122 20:03:12.655227  255382 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0122 20:03:12.655260  255382 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0122 20:03:12.682842  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0122 20:03:12.693455  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0122 20:03:12.693487  255382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0122 20:03:12.716158  255382 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0122 20:03:12.716193  255382 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0122 20:03:12.722464  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0122 20:03:12.752785  255382 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 20:03:12.752812  255382 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 20:03:12.871622  255382 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0122 20:03:12.871660  255382 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0122 20:03:12.875576  255382 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0122 20:03:12.875607  255382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0122 20:03:12.968354  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0122 20:03:12.968390  255382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0122 20:03:12.987640  255382 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 20:03:12.987671  255382 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 20:03:12.991296  255382 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0122 20:03:12.991324  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0122 20:03:13.108778  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0122 20:03:13.108819  255382 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0122 20:03:13.149655  255382 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0122 20:03:13.149689  255382 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0122 20:03:13.239795  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0122 20:03:13.239831  255382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0122 20:03:13.287593  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0122 20:03:13.305748  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 20:03:13.415440  255382 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0122 20:03:13.415472  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0122 20:03:13.575943  255382 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0122 20:03:13.575972  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0122 20:03:13.587781  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0122 20:03:13.587817  255382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0122 20:03:13.870808  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0122 20:03:14.135432  255382 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0122 20:03:14.135469  255382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0122 20:03:14.183717  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0122 20:03:14.632903  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-l82r5" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:14.756289  255382 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0122 20:03:14.756325  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0122 20:03:15.079897  255382 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0122 20:03:15.079936  255382 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0122 20:03:15.529727  255382 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.193906355s)
	I0122 20:03:15.529774  255382 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0122 20:03:15.591068  255382 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0122 20:03:15.591107  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0122 20:03:16.050785  255382 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-772234" context rescaled to 1 replicas
	I0122 20:03:16.131394  255382 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0122 20:03:16.131423  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0122 20:03:16.575555  255382 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0122 20:03:16.575603  255382 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0122 20:03:16.663644  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-l82r5" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:17.005006  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0122 20:03:18.795377  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-l82r5" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:18.796162  255382 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0122 20:03:18.796203  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:18.799916  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:18.800390  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:18.800424  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:18.800613  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:18.800856  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:18.801039  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:18.801185  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:19.433918  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.980827704s)
	I0122 20:03:19.434003  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:19.434020  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:19.434387  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:19.434410  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:19.434423  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:19.434432  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:19.434710  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:19.434731  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:19.434731  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:19.648534  255382 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0122 20:03:20.064126  255382 addons.go:238] Setting addon gcp-auth=true in "addons-772234"
	I0122 20:03:20.064218  255382 host.go:66] Checking if "addons-772234" exists ...
	I0122 20:03:20.064768  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:20.064840  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:20.082167  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37571
	I0122 20:03:20.082798  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:20.083391  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:20.083421  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:20.083923  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:20.084533  255382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:03:20.084591  255382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:03:20.101916  255382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0122 20:03:20.102568  255382 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:03:20.103239  255382 main.go:141] libmachine: Using API Version  1
	I0122 20:03:20.103291  255382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:03:20.103696  255382 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:03:20.103908  255382 main.go:141] libmachine: (addons-772234) Calling .GetState
	I0122 20:03:20.105837  255382 main.go:141] libmachine: (addons-772234) Calling .DriverName
	I0122 20:03:20.106206  255382 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0122 20:03:20.106247  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHHostname
	I0122 20:03:20.109741  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:20.110458  255382 main.go:141] libmachine: (addons-772234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:89", ip: ""} in network mk-addons-772234: {Iface:virbr1 ExpiryTime:2025-01-22 21:02:41 +0000 UTC Type:0 Mac:52:54:00:37:16:89 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-772234 Clientid:01:52:54:00:37:16:89}
	I0122 20:03:20.110500  255382 main.go:141] libmachine: (addons-772234) DBG | domain addons-772234 has defined IP address 192.168.39.58 and MAC address 52:54:00:37:16:89 in network mk-addons-772234
	I0122 20:03:20.110860  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHPort
	I0122 20:03:20.111223  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHKeyPath
	I0122 20:03:20.111466  255382 main.go:141] libmachine: (addons-772234) Calling .GetSSHUsername
	I0122 20:03:20.111651  255382 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/addons-772234/id_rsa Username:docker}
	I0122 20:03:20.174524  255382 pod_ready.go:93] pod "coredns-668d6bf9bc-l82r5" in "kube-system" namespace has status "Ready":"True"
	I0122 20:03:20.174551  255382 pod_ready.go:82] duration metric: took 7.562331206s for pod "coredns-668d6bf9bc-l82r5" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:20.174564  255382 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:22.192687  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:22.775796  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.244411423s)
	I0122 20:03:22.775884  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.775906  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.775907  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.166550509s)
	I0122 20:03:22.775958  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.775973  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.775975  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.16147067s)
	I0122 20:03:22.776018  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776036  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776034  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.148071545s)
	I0122 20:03:22.776068  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776087  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776074  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.144433434s)
	I0122 20:03:22.776151  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776152  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.139877685s)
	I0122 20:03:22.776162  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776175  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776184  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776209  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.093334453s)
	I0122 20:03:22.776229  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776256  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.053761619s)
	I0122 20:03:22.776275  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776284  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776359  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.488717259s)
	I0122 20:03:22.776380  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776389  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776240  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776507  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.470728204s)
	I0122 20:03:22.776524  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776533  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776631  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.776658  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.776662  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.905809999s)
	I0122 20:03:22.776669  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776678  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	W0122 20:03:22.776691  255382 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0122 20:03:22.776714  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.776719  255382 retry.go:31] will retry after 306.261738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0122 20:03:22.776744  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.776751  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.776758  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776764  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776776  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.593027908s)
	I0122 20:03:22.776791  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.776846  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.776918  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.776926  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.776936  255382 addons.go:479] Verifying addon ingress=true in "addons-772234"
	I0122 20:03:22.777057  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.777081  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.777093  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.777102  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.777494  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.777538  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.777545  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.777553  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.777560  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.777842  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.777874  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.777881  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.777889  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.777894  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.778213  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.778235  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.778261  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.778268  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.778577  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.778588  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.778596  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.778603  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.779117  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.779178  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.779199  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.779206  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.779218  255382 addons.go:479] Verifying addon metrics-server=true in "addons-772234"
	I0122 20:03:22.779363  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.779373  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.779436  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.779472  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.779499  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.779506  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.779514  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.779523  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.779844  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.779854  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.779864  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.779873  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.780287  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.780317  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.780323  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.782748  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.782813  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.782821  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.783020  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.783074  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.783088  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.777012  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.777039  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.783736  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.783750  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.783750  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.783758  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.783860  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.783869  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.783878  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.783885  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.783899  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.784431  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.784460  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.784490  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.784509  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.784640  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.784657  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.784667  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.784675  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.784775  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.784784  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.784915  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.784932  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.785332  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.785350  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.785362  255382 addons.go:479] Verifying addon registry=true in "addons-772234"
	I0122 20:03:22.785471  255382 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-772234 service yakd-dashboard -n yakd-dashboard
	
	I0122 20:03:22.785451  255382 out.go:177] * Verifying ingress addon...
	I0122 20:03:22.787381  255382 out.go:177] * Verifying registry addon...
	I0122 20:03:22.788813  255382 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0122 20:03:22.789484  255382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0122 20:03:22.834339  255382 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0122 20:03:22.834381  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:22.837998  255382 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0122 20:03:22.838037  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:22.868665  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.868691  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.869093  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.869124  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:22.869138  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:22.890575  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:22.890615  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:22.890991  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:22.891066  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	W0122 20:03:22.891203  255382 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I0122 20:03:23.083429  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0122 20:03:23.295002  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:23.299932  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:23.824495  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:23.824998  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:24.205535  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:24.308712  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:24.312140  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:24.816682  255382 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.710442753s)
	I0122 20:03:24.816886  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.811811976s)
	I0122 20:03:24.816953  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:24.816975  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:24.817289  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:24.817338  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:24.817344  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:24.817352  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:24.817359  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:24.817647  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:24.817711  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:24.817729  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:24.817749  255382 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-772234"
	I0122 20:03:24.818292  255382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0122 20:03:24.819115  255382 out.go:177] * Verifying csi-hostpath-driver addon...
	I0122 20:03:24.820590  255382 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0122 20:03:24.821525  255382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0122 20:03:24.821707  255382 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0122 20:03:24.821728  255382 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0122 20:03:24.853887  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:24.854504  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:24.862356  255382 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0122 20:03:24.862403  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:24.873930  255382 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0122 20:03:24.873966  255382 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0122 20:03:25.021060  255382 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0122 20:03:25.021087  255382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0122 20:03:25.105574  255382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0122 20:03:25.294742  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:25.294840  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:25.327856  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:25.536169  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.452673838s)
	I0122 20:03:25.536242  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:25.536268  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:25.536638  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:25.536660  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:25.536677  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:25.536686  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:25.537034  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:25.537077  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:25.537101  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:25.795240  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:25.795488  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:25.826757  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:26.206913  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:26.327407  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:26.327738  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:26.364057  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:26.546572  255382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.44094823s)
	I0122 20:03:26.546645  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:26.546664  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:26.547006  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:26.547077  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:26.547099  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:26.547141  255382 main.go:141] libmachine: Making call to close driver server
	I0122 20:03:26.547159  255382 main.go:141] libmachine: (addons-772234) Calling .Close
	I0122 20:03:26.547511  255382 main.go:141] libmachine: Successfully made call to close driver server
	I0122 20:03:26.547555  255382 main.go:141] libmachine: (addons-772234) DBG | Closing plugin on server side
	I0122 20:03:26.547561  255382 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 20:03:26.548899  255382 addons.go:479] Verifying addon gcp-auth=true in "addons-772234"
	I0122 20:03:26.552404  255382 out.go:177] * Verifying gcp-auth addon...
	I0122 20:03:26.554799  255382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0122 20:03:26.680750  255382 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0122 20:03:26.680790  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:26.804801  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:26.808728  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:26.839294  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:27.061563  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:27.294766  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:27.299830  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:27.330041  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:27.561984  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:27.795098  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:27.795693  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:27.826645  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:28.059383  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:28.298450  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:28.298935  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:28.326916  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:28.559149  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:28.681360  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:28.793709  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:28.793917  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:28.827147  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:29.059302  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:29.294327  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:29.294745  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:29.328016  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:29.558974  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:29.793749  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:29.794542  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:29.828358  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:30.059415  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:30.294434  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:30.294474  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:30.395140  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:30.559197  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:30.683286  255382 pod_ready.go:103] pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status "Ready":"False"
	I0122 20:03:30.794577  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:30.794886  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:30.828714  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:31.059238  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:31.293862  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:31.295747  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:31.327328  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:31.559966  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:31.793737  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:31.794593  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:31.828590  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:32.060580  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:32.471421  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:32.471877  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:32.472004  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:32.574600  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:32.685056  255382 pod_ready.go:98] pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.58 HostIPs:[{IP:192.168.39.58}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-22 20:03:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-22 20:03:19 +0000 UTC,FinishedAt:2025-01-22 20:03:29 +0000 UTC,ContainerID:cri-o://034496a0c96b94207315b7a761f98bb03aebf210d1e209b4151020175401246d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://034496a0c96b94207315b7a761f98bb03aebf210d1e209b4151020175401246d Started:0xc0027ed500 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002683b00} {Name:kube-api-access-2b55q MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002683b10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0122 20:03:32.685111  255382 pod_ready.go:82] duration metric: took 12.510540101s for pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace to be "Ready" ...
	E0122 20:03:32.685127  255382 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-vd24n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-22 20:03:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.58 HostIPs:[{IP:192.168.39.58}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-22 20:03:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-22 20:03:19 +0000 UTC,FinishedAt:2025-01-22 20:03:29 +0000 UTC,ContainerID:cri-o://034496a0c96b94207315b7a761f98bb03aebf210d1e209b4151020175401246d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://034496a0c96b94207315b7a761f98bb03aebf210d1e209b4151020175401246d Started:0xc0027ed500 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002683b00} {Name:kube-api-access-2b55q MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002683b10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0122 20:03:32.685138  255382 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.695640  255382 pod_ready.go:93] pod "etcd-addons-772234" in "kube-system" namespace has status "Ready":"True"
	I0122 20:03:32.695673  255382 pod_ready.go:82] duration metric: took 10.523302ms for pod "etcd-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.695691  255382 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.701669  255382 pod_ready.go:93] pod "kube-apiserver-addons-772234" in "kube-system" namespace has status "Ready":"True"
	I0122 20:03:32.701704  255382 pod_ready.go:82] duration metric: took 6.003382ms for pod "kube-apiserver-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.701720  255382 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.710120  255382 pod_ready.go:93] pod "kube-controller-manager-addons-772234" in "kube-system" namespace has status "Ready":"True"
	I0122 20:03:32.710159  255382 pod_ready.go:82] duration metric: took 8.429777ms for pod "kube-controller-manager-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.710172  255382 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z5sqk" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.718351  255382 pod_ready.go:93] pod "kube-proxy-z5sqk" in "kube-system" namespace has status "Ready":"True"
	I0122 20:03:32.718380  255382 pod_ready.go:82] duration metric: took 8.200872ms for pod "kube-proxy-z5sqk" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.718393  255382 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:32.795500  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:32.795767  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:32.826738  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:33.060727  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:33.080548  255382 pod_ready.go:93] pod "kube-scheduler-addons-772234" in "kube-system" namespace has status "Ready":"True"
	I0122 20:03:33.080579  255382 pod_ready.go:82] duration metric: took 362.178678ms for pod "kube-scheduler-addons-772234" in "kube-system" namespace to be "Ready" ...
	I0122 20:03:33.080589  255382 pod_ready.go:39] duration metric: took 20.490690829s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 20:03:33.080608  255382 api_server.go:52] waiting for apiserver process to appear ...
	I0122 20:03:33.080667  255382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:03:33.101169  255382 api_server.go:72] duration metric: took 21.330551169s to wait for apiserver process to appear ...
	I0122 20:03:33.101202  255382 api_server.go:88] waiting for apiserver healthz status ...
	I0122 20:03:33.101226  255382 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0122 20:03:33.106797  255382 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0122 20:03:33.108012  255382 api_server.go:141] control plane version: v1.32.1
	I0122 20:03:33.108046  255382 api_server.go:131] duration metric: took 6.836246ms to wait for apiserver health ...
	I0122 20:03:33.108066  255382 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 20:03:33.292301  255382 system_pods.go:59] 19 kube-system pods found
	I0122 20:03:33.292375  255382 system_pods.go:61] "amd-gpu-device-plugin-m4f7k" [89feff56-65d3-453c-aec2-2b913700601f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0122 20:03:33.292388  255382 system_pods.go:61] "coredns-668d6bf9bc-l82r5" [812962f6-19c9-455f-b6ac-95c739ebbc05] Running
	I0122 20:03:33.292400  255382 system_pods.go:61] "coredns-668d6bf9bc-vd24n" [e0bcf903-ee5d-4d0f-a3df-751a36d4f3d7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0122 20:03:33.292411  255382 system_pods.go:61] "csi-hostpath-attacher-0" [93a42413-3ef1-49d2-a0df-62b0e9a319de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0122 20:03:33.292421  255382 system_pods.go:61] "csi-hostpath-resizer-0" [5546f4b8-d18e-4914-8335-208c5695ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0122 20:03:33.292436  255382 system_pods.go:61] "csi-hostpathplugin-zt69b" [bbf8a63e-71be-4a91-953f-d82996dad359] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0122 20:03:33.292445  255382 system_pods.go:61] "etcd-addons-772234" [3556bef2-1c45-4c3d-a539-e860296c56c5] Running
	I0122 20:03:33.292456  255382 system_pods.go:61] "kube-apiserver-addons-772234" [5cc6fb53-6029-4704-ad1a-34bade6b6fd9] Running
	I0122 20:03:33.292466  255382 system_pods.go:61] "kube-controller-manager-addons-772234" [8f2e47c0-63db-4b82-87eb-a30fc9e83310] Running
	I0122 20:03:33.292479  255382 system_pods.go:61] "kube-ingress-dns-minikube" [45e91eb0-f8cb-4691-8180-3854ae9f05e7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0122 20:03:33.292489  255382 system_pods.go:61] "kube-proxy-z5sqk" [b6e83878-3ae5-4c34-be45-cd9133d33398] Running
	I0122 20:03:33.292499  255382 system_pods.go:61] "kube-scheduler-addons-772234" [94f3e000-fa2d-4e85-8754-2fb9c0e9e4b1] Running
	I0122 20:03:33.292511  255382 system_pods.go:61] "metrics-server-7fbb699795-qnc8h" [e2d700be-750b-4d4d-a086-8d4000faa1e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 20:03:33.292527  255382 system_pods.go:61] "nvidia-device-plugin-daemonset-28lq2" [4d14b2d9-bcc1-4a92-9453-8af3817ffa52] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0122 20:03:33.292539  255382 system_pods.go:61] "registry-6c88467877-zjk8j" [915fd237-ebbe-434c-adc5-f3abec60767f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0122 20:03:33.292552  255382 system_pods.go:61] "registry-proxy-zwvcf" [8a6211f0-8029-4ac7-9a77-513808839094] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0122 20:03:33.292569  255382 system_pods.go:61] "snapshot-controller-68b874b76f-2ds69" [2e888ff0-0853-4f35-85af-9483d8e8996b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0122 20:03:33.292583  255382 system_pods.go:61] "snapshot-controller-68b874b76f-htb96" [ac3cf146-3878-4a7b-be98-c7589eb53409] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0122 20:03:33.292593  255382 system_pods.go:61] "storage-provisioner" [ba0dc1fc-a288-4472-9efd-a495438aaf68] Running
	I0122 20:03:33.292607  255382 system_pods.go:74] duration metric: took 184.532833ms to wait for pod list to return data ...
	I0122 20:03:33.292628  255382 default_sa.go:34] waiting for default service account to be created ...
	I0122 20:03:33.294445  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:33.294844  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:33.326808  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:33.482726  255382 default_sa.go:45] found service account: "default"
	I0122 20:03:33.482760  255382 default_sa.go:55] duration metric: took 190.119712ms for default service account to be created ...
	I0122 20:03:33.482775  255382 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 20:03:33.559373  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:33.688586  255382 system_pods.go:87] 18 kube-system pods found
	I0122 20:03:33.794266  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:33.795313  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:33.828283  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:33.880250  255382 system_pods.go:105] "amd-gpu-device-plugin-m4f7k" [89feff56-65d3-453c-aec2-2b913700601f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0122 20:03:33.880273  255382 system_pods.go:105] "coredns-668d6bf9bc-l82r5" [812962f6-19c9-455f-b6ac-95c739ebbc05] Running
	I0122 20:03:33.880284  255382 system_pods.go:105] "csi-hostpath-attacher-0" [93a42413-3ef1-49d2-a0df-62b0e9a319de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0122 20:03:33.880294  255382 system_pods.go:105] "csi-hostpath-resizer-0" [5546f4b8-d18e-4914-8335-208c5695ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0122 20:03:33.880305  255382 system_pods.go:105] "csi-hostpathplugin-zt69b" [bbf8a63e-71be-4a91-953f-d82996dad359] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0122 20:03:33.880314  255382 system_pods.go:105] "etcd-addons-772234" [3556bef2-1c45-4c3d-a539-e860296c56c5] Running
	I0122 20:03:33.880319  255382 system_pods.go:105] "kube-apiserver-addons-772234" [5cc6fb53-6029-4704-ad1a-34bade6b6fd9] Running
	I0122 20:03:33.880324  255382 system_pods.go:105] "kube-controller-manager-addons-772234" [8f2e47c0-63db-4b82-87eb-a30fc9e83310] Running
	I0122 20:03:33.880330  255382 system_pods.go:105] "kube-ingress-dns-minikube" [45e91eb0-f8cb-4691-8180-3854ae9f05e7] Running
	I0122 20:03:33.880335  255382 system_pods.go:105] "kube-proxy-z5sqk" [b6e83878-3ae5-4c34-be45-cd9133d33398] Running
	I0122 20:03:33.880341  255382 system_pods.go:105] "kube-scheduler-addons-772234" [94f3e000-fa2d-4e85-8754-2fb9c0e9e4b1] Running
	I0122 20:03:33.880351  255382 system_pods.go:105] "metrics-server-7fbb699795-qnc8h" [e2d700be-750b-4d4d-a086-8d4000faa1e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 20:03:33.880362  255382 system_pods.go:105] "nvidia-device-plugin-daemonset-28lq2" [4d14b2d9-bcc1-4a92-9453-8af3817ffa52] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0122 20:03:33.880372  255382 system_pods.go:105] "registry-6c88467877-zjk8j" [915fd237-ebbe-434c-adc5-f3abec60767f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0122 20:03:33.880382  255382 system_pods.go:105] "registry-proxy-zwvcf" [8a6211f0-8029-4ac7-9a77-513808839094] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0122 20:03:33.880392  255382 system_pods.go:105] "snapshot-controller-68b874b76f-2ds69" [2e888ff0-0853-4f35-85af-9483d8e8996b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0122 20:03:33.880402  255382 system_pods.go:105] "snapshot-controller-68b874b76f-htb96" [ac3cf146-3878-4a7b-be98-c7589eb53409] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0122 20:03:33.880415  255382 system_pods.go:105] "storage-provisioner" [ba0dc1fc-a288-4472-9efd-a495438aaf68] Running
	I0122 20:03:33.880429  255382 system_pods.go:147] duration metric: took 397.64503ms to wait for k8s-apps to be running ...
	I0122 20:03:33.880445  255382 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 20:03:33.880519  255382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:03:33.930528  255382 system_svc.go:56] duration metric: took 50.072176ms WaitForService to wait for kubelet
	I0122 20:03:33.930564  255382 kubeadm.go:582] duration metric: took 22.159952852s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 20:03:33.930592  255382 node_conditions.go:102] verifying NodePressure condition ...
	I0122 20:03:34.060007  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:34.079551  255382 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 20:03:34.079588  255382 node_conditions.go:123] node cpu capacity is 2
	I0122 20:03:34.079606  255382 node_conditions.go:105] duration metric: took 149.008303ms to run NodePressure ...
	I0122 20:03:34.079624  255382 start.go:241] waiting for startup goroutines ...
	I0122 20:03:34.294277  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:34.294975  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:34.327437  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:34.558977  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:34.795377  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:34.795521  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:34.826676  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:35.057943  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:35.294419  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:35.295005  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:35.327140  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:35.566907  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:35.794143  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:35.794321  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:35.826689  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:36.059327  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:36.293834  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:36.295050  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:36.326684  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:36.559237  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:36.793567  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:36.794615  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:36.828192  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:37.058859  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:37.294693  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:37.295366  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:37.327547  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:37.559538  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:37.795691  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:37.797418  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:37.827225  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:38.058361  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:38.293689  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:38.294454  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:38.327344  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:38.995480  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:38.996212  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:38.996262  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:38.996531  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:39.090497  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:39.294215  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:39.294364  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:39.327767  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:39.559093  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:39.795137  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:39.795459  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:39.827623  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:40.059075  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:40.293553  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:40.294406  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:40.327719  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:40.558362  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:40.793920  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:40.794380  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:40.826729  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:41.100838  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:41.294710  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:41.295162  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:41.326866  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:41.559036  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:41.794920  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:41.795865  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:41.826381  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:42.058774  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:42.294430  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:42.294609  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:42.327015  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:42.559489  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:42.794254  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:42.794650  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:42.826042  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:43.059037  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:43.294042  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:43.294243  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:43.331933  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:43.558727  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:43.796770  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:43.797274  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:43.828115  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:44.059272  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:44.299556  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:44.301721  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:44.328055  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:44.560156  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:44.794790  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:44.795159  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:44.827331  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:45.058246  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:45.294799  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:45.295159  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:45.326037  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:45.559279  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:45.796155  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:45.796446  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:45.826116  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:46.058884  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:46.295496  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:46.296085  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:46.327203  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:46.560494  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:46.793412  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:46.794120  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:46.827512  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:47.059789  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:47.294676  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:47.295203  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:47.395683  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:47.559228  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:47.794857  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:47.795774  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:47.827263  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:48.059671  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:48.294840  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:48.295488  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:48.326931  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:48.559534  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:48.794126  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:48.794816  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:48.826608  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:49.059558  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:49.294537  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:49.295013  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:49.395462  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:49.559632  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:49.794109  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:49.795162  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:49.827778  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:50.058663  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:50.294655  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:50.295026  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:50.327229  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:50.558966  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:50.796276  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:50.796881  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:50.826812  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:51.058508  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:51.294649  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:51.294831  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:51.395186  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:51.559415  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:51.802472  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:51.802619  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:51.829453  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:52.059545  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:52.293977  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:52.294227  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:52.325726  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:52.558639  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:52.795163  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:52.795807  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:52.827197  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:53.058565  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:53.293921  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:53.294708  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:53.326632  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:53.559296  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:53.793922  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:53.794249  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:53.827109  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:54.058754  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:54.295539  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:54.296047  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:54.330249  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:54.559389  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:54.795386  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:54.795828  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:54.827041  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:55.059021  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:55.295952  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:55.296273  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:55.325812  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:55.558997  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:55.793595  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:55.794058  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:55.826339  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:56.059128  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:56.295230  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:56.295772  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:56.327119  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:56.558768  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:56.794328  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:56.795442  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:56.827622  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:57.059381  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:57.295857  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:57.297170  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:57.395847  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:57.559197  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:57.793496  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:57.794860  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:57.827504  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:58.061675  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:58.294788  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:58.295793  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:58.327468  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:58.559630  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:58.794268  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:58.794774  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:58.831444  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:59.059745  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:59.295448  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:59.297939  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:59.327136  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:03:59.560530  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:03:59.794055  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:03:59.794309  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:03:59.828851  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:00.059848  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:00.295692  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:04:00.296059  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:00.326798  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:00.558929  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:01.119683  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:01.120988  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:01.121485  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:01.122711  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:04:01.294495  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:04:01.295029  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:01.326421  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:01.562092  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:01.794120  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 20:04:01.794210  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:01.827048  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:02.059332  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:02.294448  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:02.294566  255382 kapi.go:107] duration metric: took 39.505077765s to wait for kubernetes.io/minikube-addons=registry ...
	I0122 20:04:02.326512  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:02.558544  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:02.793882  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:02.826566  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:03.059358  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:03.294216  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:03.327725  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:03.558483  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:03.793987  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:03.826452  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:04.059610  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:04.300600  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:04.328771  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:04.558498  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:04.794110  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:04.828485  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:05.058984  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:05.299354  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:05.328910  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:05.574282  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:05.793880  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:05.826812  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:06.059578  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:06.294456  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:06.326276  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:06.559018  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:06.794097  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:06.826403  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:07.059933  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:07.311303  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:07.331970  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:07.562599  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:07.810398  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:07.835797  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:08.059120  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:08.293751  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:08.394966  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:08.558659  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:08.794806  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:08.826518  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:09.059574  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:09.293945  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:09.326913  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:09.559486  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:09.794760  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:09.826570  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:10.059057  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:10.293126  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:10.326942  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:10.559305  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:10.793828  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:10.827659  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:11.059295  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:11.293720  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:11.326694  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:11.560232  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:11.793543  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:11.826557  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:12.058961  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:12.297405  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:12.327001  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:12.558632  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:12.793447  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:12.826565  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:13.059467  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:13.293992  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:13.327049  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:13.559229  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:13.793107  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:13.826760  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:14.058488  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:14.293929  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:14.330482  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:14.559423  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:14.794225  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:14.827444  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:15.061254  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:15.293995  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:15.327930  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:15.559199  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:15.793272  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:15.827090  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:16.058753  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:16.294710  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:16.327604  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:16.559660  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:16.795000  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:16.826541  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:17.059380  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:17.294096  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:17.327645  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:17.559471  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:17.793990  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:17.826647  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:18.059644  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:18.293940  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:18.327298  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:18.559171  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:18.793324  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:18.826713  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:19.059229  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:19.293539  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:19.326946  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:19.574227  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:19.793796  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:19.827194  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:20.059143  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:20.293593  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:20.326864  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:20.561138  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:20.794489  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:20.827104  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:21.059150  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:21.301207  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:21.327716  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:21.564876  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:21.794157  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:21.826870  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:22.059124  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:22.293623  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:22.327407  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:22.559517  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:22.794787  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:22.827396  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:23.060775  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:23.309664  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:23.331556  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:23.568462  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:23.800215  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:23.830694  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:24.059031  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:24.294452  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:24.326567  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:24.566674  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:24.793711  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:24.827453  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:25.058908  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:25.295734  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:25.326776  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:25.569101  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:25.795385  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:25.828060  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:26.059482  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:26.293592  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:26.326650  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:26.558903  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:26.794299  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:26.827378  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:27.069507  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:27.294608  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:27.327762  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:27.580670  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:27.794425  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:27.826449  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:28.065130  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:28.294469  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:28.327293  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:28.558863  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:28.794027  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:28.827370  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:29.059449  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:29.294313  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:29.328388  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:29.559994  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:29.794198  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:29.828046  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:30.059024  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:30.293451  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:30.325961  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:30.559237  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:30.794605  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:30.895370  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:31.062389  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:31.298560  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:31.332957  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:31.561426  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:31.794467  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:31.827493  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:32.059485  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:32.304167  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:32.396920  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:32.561129  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:32.793518  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:32.826700  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:33.059085  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:33.297343  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:33.327375  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:33.558747  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:33.795506  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:33.827673  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:34.060205  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:34.293432  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:34.328875  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:34.563062  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:34.795055  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:34.829263  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:35.059014  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:35.294344  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:35.327187  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:35.809786  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:35.819277  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:35.849216  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:36.058967  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:36.295505  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:36.326459  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:36.559658  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:36.794636  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:36.827108  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:37.058587  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:37.295035  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:37.327245  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:37.558697  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:37.794219  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:37.828056  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:38.098211  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:38.295503  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:38.328177  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:38.560517  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:38.794032  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:38.829086  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:39.059507  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:39.301403  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:39.327429  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:39.563509  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:39.794054  255382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 20:04:39.827184  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:40.061433  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:40.295080  255382 kapi.go:107] duration metric: took 1m17.506259335s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0122 20:04:40.326895  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:40.561183  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:40.828903  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:41.059484  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:41.327076  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:41.558690  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:41.826577  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:42.059471  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:42.328435  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:42.562332  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:42.833469  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:43.060190  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:43.327015  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:43.563352  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:43.827322  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:44.062469  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:44.329174  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:44.559189  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:44.826348  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:45.059235  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:45.327255  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:45.559550  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:45.826982  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:46.058961  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:46.327472  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:46.559354  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:46.828972  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:47.059489  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:47.326998  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 20:04:47.559857  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:47.827427  255382 kapi.go:107] duration metric: took 1m23.005893383s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0122 20:04:48.058812  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:48.559694  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:49.060443  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:49.559681  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:50.059246  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:50.560011  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:51.059520  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:51.559252  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:52.059596  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:52.560596  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:53.059972  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:53.558346  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:54.059107  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:54.560592  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:55.059524  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:55.559437  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:56.059245  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:56.559607  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:57.059148  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:57.559879  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:58.060152  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:58.559981  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:59.060142  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:04:59.559509  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:00.059545  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:00.560279  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:01.059550  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:01.558879  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:02.059832  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:02.559350  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:03.059961  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:03.559321  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:04.059633  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:04.560178  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:05.062496  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:05.559736  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:06.059480  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:06.559906  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:07.058722  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:07.559988  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:08.059381  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:08.559331  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:09.059503  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:09.560127  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:10.059978  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:10.559726  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:11.059877  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:11.559431  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:12.058573  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:12.560496  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:13.059786  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:13.559371  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:14.059988  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:14.562702  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:15.059406  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:15.559917  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:16.060055  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:16.560251  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:17.058893  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:17.559198  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:18.058975  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:18.560219  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:19.059249  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:19.559239  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:20.060019  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:20.560936  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:21.060095  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:21.560319  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:22.060166  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:22.560048  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:23.060506  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:23.560148  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:24.058635  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:24.559724  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:25.059431  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:25.559712  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:26.059718  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:26.560217  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:27.059834  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:27.558782  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:28.059497  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:28.560553  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:29.060145  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:29.560244  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:30.059402  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:30.559571  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:31.059475  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:31.558584  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:32.058894  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:32.560148  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:33.059912  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:33.560228  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:34.059021  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:34.559562  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:35.059931  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:35.559597  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:36.060931  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:36.559683  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:37.060023  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:37.560168  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:38.059144  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:38.559937  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:39.059725  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:39.560252  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:40.059302  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:40.559740  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:41.060319  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:41.559083  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:42.058938  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:42.559399  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:43.060120  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:43.558658  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:44.059846  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:44.559362  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:45.059603  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:45.559776  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:46.059951  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:46.562304  255382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 20:05:47.059889  255382 kapi.go:107] duration metric: took 2m20.505087128s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0122 20:05:47.061594  255382 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-772234 cluster.
	I0122 20:05:47.063069  255382 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0122 20:05:47.064548  255382 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0122 20:05:47.065938  255382 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0122 20:05:47.067181  255382 addons.go:514] duration metric: took 2m35.296818737s for enable addons: enabled=[inspektor-gadget metrics-server amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0122 20:05:47.067237  255382 start.go:246] waiting for cluster config update ...
	I0122 20:05:47.067273  255382 start.go:255] writing updated cluster config ...
	I0122 20:05:47.067625  255382 ssh_runner.go:195] Run: rm -f paused
	I0122 20:05:47.131864  255382 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 20:05:47.133766  255382 out.go:177] * Done! kubectl is now configured to use "addons-772234" cluster and "default" namespace by default
	
	
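	The gcp-auth hint printed above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration") is the addon's opt-out mechanism for credential mounting. A minimal pod manifest carrying that opt-out label might look like the sketch below; the pod name and command are illustrative, and the label value "true" is only chosen for readability, since the log message mentions the key itself:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                  # illustrative name
	    labels:
	      gcp-auth-skip-secret: "true"      # opts this pod out of the gcp-auth credential mount
	  spec:
	    containers:
	    - name: busybox
	      image: gcr.io/k8s-minikube/busybox  # image already used by this test run's workloads
	      command: ["sleep", "3600"]
	
	Applied with something like `kubectl --context addons-772234 apply -f pod.yaml`, such a pod would be admitted without the mounted GCP credentials, per the addon's note; existing pods would still need to be recreated (or the addon re-enabled with --refresh) as stated above.
	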
	==> CRI-O <==
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.764393522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ef8efda-0552-4f79-84e9-e2029aea0991 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.764795893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:378ad966fe8c6d98731272293731d99d06164243ae8649c1da8c375568dcb33e,PodSandboxId:503f6bf30025377ab4c31e6b2662d859179ac56feef5aa50cd4df894908e321e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737576394202107765,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b0101f5-4f0f-44ea-af44-62a0d91ae084,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d33575b5148d7188fa4c4c7df9e67b63347d6564dd6fbca0ada1da4847521,PodSandboxId:3414637a9fd676cee0a79820c655119a160f876e8b4cd791cc1ad59fc7d85589,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737576353046131722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2a71b7-8b58-4680-a61c-accbf7d6a820,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cfdbe0d4397cf8839fb79b5dc0b33bf6ad7d9476326d98976441e02a70a57b,PodSandboxId:cfddd391bb5e2130e9a60cbb5c5d52c0e62210914307963d6e132450a4f3bdbe,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737576279030202206,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qvz4c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 908057ab-3e57-464a-ac34-aa4153e708a7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b21719f3fe46b613ec99e633ccbcb1d2f6c846b05f98a22e1d012bdf3143564f,PodSandboxId:3a2db65d5ea8f81fb12e257857e993957b28e8bbe69977241b981618bc0f1c82,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1737576264556823184,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f9tx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fea4bbf-eed0-4a68-85e1-39bc026096d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c5e8caeb4bce076f9a340405ebf72668dff2c26719e143017c6738ee10584c,PodSandboxId:674d49c2819c61da6a4cd190af0b36570ea09c379562e5ab7ea7382ec9262cd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737576263404156191,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vljjt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 400bb0ce-5261-4589-8ed1-8f46f602ca28,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e9eb6a0e43d05ce9ab12402ae8ed6aea429eb7b635c124166d3d51fcf3ea21,PodSandboxId:5975bdb11f892d7b55773ed6181630e7a222575c7e650bda090479e2033d9c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737576229026408072,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m4f7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89feff56-65d3-453c-aec2-2b913700601f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96ec4eb1e5cdb491de90ce68342ac23de8042699b5238f65f4583e9dc6e6e,PodSandboxId:179aac9d5c9fbd3a563d8a6fdaf5b99b0657efb5542b78e9b1422aa7d1d845a6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737576212626034849,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e91eb0-f8cb-4691-8180-3854ae9f05e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1710a94d8f75242581c7927de0b8cfed2729fbcb5c62ee6b435951fc530e9e,PodSandboxId:cf17b8ff4efc1759f28a6c0318b65829c688107e12c2e328056695da55d64c14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737576200765037430,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0dc1fc-a288-4472-9efd-a495438aaf68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f074dd12b01f6c2d7be32fc9b4cc15a7e09633ef41250303f9bb4b741fde0569,PodSandboxId:1a9e41785bcd3553e558a821f95ebcf358458c51cfecb73a3455570dc9d8a60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737576197630233886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l82r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812962f6-19c9-455f-b6ac-95c739ebbc05,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac16747769dfdc0bb27e445ca792c23eae693dd9bff634d25813
8a0f08ee948,PodSandboxId:ea4e05eae14acc1ef3a8e29b8f49bfa235f00f576a8b8423fa1d0d45c6bd54df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737576194550938852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5sqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6e83878-3ae5-4c34-be45-cd9133d33398,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b625d59f0a3dc6f7a1ba016eda0fe92b5e10d07784039d3318472292de2da,PodSandboxId:dc57dac1
80ba7489f002f85ef995d237cfd5083ddb1156e7baee588e5c84c9c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737576180515292863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752c1168244c4a6b364d806da2f9e17e,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cc96ca2ccb481a5cebf5b89d989cb39c43caf6970294fdd1f71ae142c3c366,PodSandboxId:50ef6552fe7e0dd05a0f7ade9
5b73c0ab24d9bbbd6eef9a06418c62cfcc16b91,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737576180538286820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f580f6c3618cbf92b0d9c85c8224c689,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5773766167b6364ccaf40cf36c210fbddf6d30e38761dd913268a6f7fb1fc4,PodSandboxId:83635356ad855dabb6e00b34c0a5d77ec683c846981e0ab661bef9773609b5e0,Metadat
a:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737576180523878120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659d9439c2bea1b6cc18aef8f08e67d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9341c92c736b093a1a45a4cd9bdad6e54063edd8de69cef5367eec277041998,PodSandboxId:79dd2d723d06aa99965bc4f4530e9db619fdc49d14f2792ca0eb19dccb0790d
d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737576180511966112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cb68ad7fccfeed3db9f82c4dbb357,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ef8efda-0552-4f79-84e9-e2029aea0991 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.811057147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e91f5dd-354f-418e-8b0f-8e9fa811aafa name=/runtime.v1.RuntimeService/Version
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.811152296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e91f5dd-354f-418e-8b0f-8e9fa811aafa name=/runtime.v1.RuntimeService/Version
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.812834535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00759a13-ad92-461d-b9d4-66aef5d191e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.814161520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576532814126550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00759a13-ad92-461d-b9d4-66aef5d191e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.815093982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=218ce97f-ecb4-4c71-a1a4-2e77c2a5fa63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.815184075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=218ce97f-ecb4-4c71-a1a4-2e77c2a5fa63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.815644549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:378ad966fe8c6d98731272293731d99d06164243ae8649c1da8c375568dcb33e,PodSandboxId:503f6bf30025377ab4c31e6b2662d859179ac56feef5aa50cd4df894908e321e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737576394202107765,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b0101f5-4f0f-44ea-af44-62a0d91ae084,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d33575b5148d7188fa4c4c7df9e67b63347d6564dd6fbca0ada1da4847521,PodSandboxId:3414637a9fd676cee0a79820c655119a160f876e8b4cd791cc1ad59fc7d85589,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737576353046131722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2a71b7-8b58-4680-a61c-accbf7d6a820,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cfdbe0d4397cf8839fb79b5dc0b33bf6ad7d9476326d98976441e02a70a57b,PodSandboxId:cfddd391bb5e2130e9a60cbb5c5d52c0e62210914307963d6e132450a4f3bdbe,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737576279030202206,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qvz4c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 908057ab-3e57-464a-ac34-aa4153e708a7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b21719f3fe46b613ec99e633ccbcb1d2f6c846b05f98a22e1d012bdf3143564f,PodSandboxId:3a2db65d5ea8f81fb12e257857e993957b28e8bbe69977241b981618bc0f1c82,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1737576264556823184,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f9tx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fea4bbf-eed0-4a68-85e1-39bc026096d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c5e8caeb4bce076f9a340405ebf72668dff2c26719e143017c6738ee10584c,PodSandboxId:674d49c2819c61da6a4cd190af0b36570ea09c379562e5ab7ea7382ec9262cd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737576263404156191,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vljjt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 400bb0ce-5261-4589-8ed1-8f46f602ca28,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e9eb6a0e43d05ce9ab12402ae8ed6aea429eb7b635c124166d3d51fcf3ea21,PodSandboxId:5975bdb11f892d7b55773ed6181630e7a222575c7e650bda090479e2033d9c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737576229026408072,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m4f7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89feff56-65d3-453c-aec2-2b913700601f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96ec4eb1e5cdb491de90ce68342ac23de8042699b5238f65f4583e9dc6e6e,PodSandboxId:179aac9d5c9fbd3a563d8a6fdaf5b99b0657efb5542b78e9b1422aa7d1d845a6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737576212626034849,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e91eb0-f8cb-4691-8180-3854ae9f05e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1710a94d8f75242581c7927de0b8cfed2729fbcb5c62ee6b435951fc530e9e,PodSandboxId:cf17b8ff4efc1759f28a6c0318b65829c688107e12c2e328056695da55d64c14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737576200765037430,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0dc1fc-a288-4472-9efd-a495438aaf68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f074dd12b01f6c2d7be32fc9b4cc15a7e09633ef41250303f9bb4b741fde0569,PodSandboxId:1a9e41785bcd3553e558a821f95ebcf358458c51cfecb73a3455570dc9d8a60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737576197630233886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l82r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812962f6-19c9-455f-b6ac-95c739ebbc05,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac16747769dfdc0bb27e445ca792c23eae693dd9bff634d25813
8a0f08ee948,PodSandboxId:ea4e05eae14acc1ef3a8e29b8f49bfa235f00f576a8b8423fa1d0d45c6bd54df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737576194550938852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5sqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6e83878-3ae5-4c34-be45-cd9133d33398,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b625d59f0a3dc6f7a1ba016eda0fe92b5e10d07784039d3318472292de2da,PodSandboxId:dc57dac1
80ba7489f002f85ef995d237cfd5083ddb1156e7baee588e5c84c9c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737576180515292863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752c1168244c4a6b364d806da2f9e17e,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cc96ca2ccb481a5cebf5b89d989cb39c43caf6970294fdd1f71ae142c3c366,PodSandboxId:50ef6552fe7e0dd05a0f7ade9
5b73c0ab24d9bbbd6eef9a06418c62cfcc16b91,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737576180538286820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f580f6c3618cbf92b0d9c85c8224c689,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5773766167b6364ccaf40cf36c210fbddf6d30e38761dd913268a6f7fb1fc4,PodSandboxId:83635356ad855dabb6e00b34c0a5d77ec683c846981e0ab661bef9773609b5e0,Metadat
a:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737576180523878120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659d9439c2bea1b6cc18aef8f08e67d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9341c92c736b093a1a45a4cd9bdad6e54063edd8de69cef5367eec277041998,PodSandboxId:79dd2d723d06aa99965bc4f4530e9db619fdc49d14f2792ca0eb19dccb0790d
d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737576180511966112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cb68ad7fccfeed3db9f82c4dbb357,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=218ce97f-ecb4-4c71-a1a4-2e77c2a5fa63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.861639207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=405923a9-bd58-4e5c-9624-3a0d455a01d9 name=/runtime.v1.RuntimeService/Version
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.861729253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=405923a9-bd58-4e5c-9624-3a0d455a01d9 name=/runtime.v1.RuntimeService/Version
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.863165353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1e42591-6dc4-47af-83b8-7c3a578d304c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.864598804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576532864556665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1e42591-6dc4-47af-83b8-7c3a578d304c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.865784643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d075be7-01a9-4e68-9292-3520c28c7ea9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.865890516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d075be7-01a9-4e68-9292-3520c28c7ea9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.866198420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:378ad966fe8c6d98731272293731d99d06164243ae8649c1da8c375568dcb33e,PodSandboxId:503f6bf30025377ab4c31e6b2662d859179ac56feef5aa50cd4df894908e321e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737576394202107765,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b0101f5-4f0f-44ea-af44-62a0d91ae084,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d33575b5148d7188fa4c4c7df9e67b63347d6564dd6fbca0ada1da4847521,PodSandboxId:3414637a9fd676cee0a79820c655119a160f876e8b4cd791cc1ad59fc7d85589,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737576353046131722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2a71b7-8b58-4680-a61c-accbf7d6a820,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cfdbe0d4397cf8839fb79b5dc0b33bf6ad7d9476326d98976441e02a70a57b,PodSandboxId:cfddd391bb5e2130e9a60cbb5c5d52c0e62210914307963d6e132450a4f3bdbe,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737576279030202206,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qvz4c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 908057ab-3e57-464a-ac34-aa4153e708a7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b21719f3fe46b613ec99e633ccbcb1d2f6c846b05f98a22e1d012bdf3143564f,PodSandboxId:3a2db65d5ea8f81fb12e257857e993957b28e8bbe69977241b981618bc0f1c82,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1737576264556823184,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f9tx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fea4bbf-eed0-4a68-85e1-39bc026096d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c5e8caeb4bce076f9a340405ebf72668dff2c26719e143017c6738ee10584c,PodSandboxId:674d49c2819c61da6a4cd190af0b36570ea09c379562e5ab7ea7382ec9262cd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737576263404156191,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vljjt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 400bb0ce-5261-4589-8ed1-8f46f602ca28,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e9eb6a0e43d05ce9ab12402ae8ed6aea429eb7b635c124166d3d51fcf3ea21,PodSandboxId:5975bdb11f892d7b55773ed6181630e7a222575c7e650bda090479e2033d9c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737576229026408072,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m4f7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89feff56-65d3-453c-aec2-2b913700601f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96ec4eb1e5cdb491de90ce68342ac23de8042699b5238f65f4583e9dc6e6e,PodSandboxId:179aac9d5c9fbd3a563d8a6fdaf5b99b0657efb5542b78e9b1422aa7d1d845a6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737576212626034849,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e91eb0-f8cb-4691-8180-3854ae9f05e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1710a94d8f75242581c7927de0b8cfed2729fbcb5c62ee6b435951fc530e9e,PodSandboxId:cf17b8ff4efc1759f28a6c0318b65829c688107e12c2e328056695da55d64c14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737576200765037430,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0dc1fc-a288-4472-9efd-a495438aaf68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f074dd12b01f6c2d7be32fc9b4cc15a7e09633ef41250303f9bb4b741fde0569,PodSandboxId:1a9e41785bcd3553e558a821f95ebcf358458c51cfecb73a3455570dc9d8a60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737576197630233886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l82r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812962f6-19c9-455f-b6ac-95c739ebbc05,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac16747769dfdc0bb27e445ca792c23eae693dd9bff634d25813
8a0f08ee948,PodSandboxId:ea4e05eae14acc1ef3a8e29b8f49bfa235f00f576a8b8423fa1d0d45c6bd54df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737576194550938852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5sqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6e83878-3ae5-4c34-be45-cd9133d33398,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b625d59f0a3dc6f7a1ba016eda0fe92b5e10d07784039d3318472292de2da,PodSandboxId:dc57dac1
80ba7489f002f85ef995d237cfd5083ddb1156e7baee588e5c84c9c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737576180515292863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752c1168244c4a6b364d806da2f9e17e,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cc96ca2ccb481a5cebf5b89d989cb39c43caf6970294fdd1f71ae142c3c366,PodSandboxId:50ef6552fe7e0dd05a0f7ade9
5b73c0ab24d9bbbd6eef9a06418c62cfcc16b91,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737576180538286820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f580f6c3618cbf92b0d9c85c8224c689,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5773766167b6364ccaf40cf36c210fbddf6d30e38761dd913268a6f7fb1fc4,PodSandboxId:83635356ad855dabb6e00b34c0a5d77ec683c846981e0ab661bef9773609b5e0,Metadat
a:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737576180523878120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659d9439c2bea1b6cc18aef8f08e67d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9341c92c736b093a1a45a4cd9bdad6e54063edd8de69cef5367eec277041998,PodSandboxId:79dd2d723d06aa99965bc4f4530e9db619fdc49d14f2792ca0eb19dccb0790d
d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737576180511966112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cb68ad7fccfeed3db9f82c4dbb357,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d075be7-01a9-4e68-9292-3520c28c7ea9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.907916633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40a8a8f0-65e5-42d3-9bfa-2bfca08d2fa1 name=/runtime.v1.RuntimeService/Version
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.907995301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40a8a8f0-65e5-42d3-9bfa-2bfca08d2fa1 name=/runtime.v1.RuntimeService/Version
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.912809124Z" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" file="docker/docker_client.go:901"
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.913065583Z" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Akicbase%2Fecho-server%3Apull&service=registry.docker.io" file="docker/docker_client.go:861"
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.916038319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a53101c-8247-4904-8db5-388b19da0967 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.917663384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576532917625064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a53101c-8247-4904-8db5-388b19da0967 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.918636688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69ab4719-6dba-4d45-b21d-e3ec61558fbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.918728038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69ab4719-6dba-4d45-b21d-e3ec61558fbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 20:08:52 addons-772234 crio[664]: time="2025-01-22 20:08:52.919161927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:378ad966fe8c6d98731272293731d99d06164243ae8649c1da8c375568dcb33e,PodSandboxId:503f6bf30025377ab4c31e6b2662d859179ac56feef5aa50cd4df894908e321e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737576394202107765,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b0101f5-4f0f-44ea-af44-62a0d91ae084,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d33575b5148d7188fa4c4c7df9e67b63347d6564dd6fbca0ada1da4847521,PodSandboxId:3414637a9fd676cee0a79820c655119a160f876e8b4cd791cc1ad59fc7d85589,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737576353046131722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2a71b7-8b58-4680-a61c-accbf7d6a820,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cfdbe0d4397cf8839fb79b5dc0b33bf6ad7d9476326d98976441e02a70a57b,PodSandboxId:cfddd391bb5e2130e9a60cbb5c5d52c0e62210914307963d6e132450a4f3bdbe,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737576279030202206,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qvz4c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 908057ab-3e57-464a-ac34-aa4153e708a7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b21719f3fe46b613ec99e633ccbcb1d2f6c846b05f98a22e1d012bdf3143564f,PodSandboxId:3a2db65d5ea8f81fb12e257857e993957b28e8bbe69977241b981618bc0f1c82,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1737576264556823184,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f9tx6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fea4bbf-eed0-4a68-85e1-39bc026096d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c5e8caeb4bce076f9a340405ebf72668dff2c26719e143017c6738ee10584c,PodSandboxId:674d49c2819c61da6a4cd190af0b36570ea09c379562e5ab7ea7382ec9262cd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737576263404156191,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vljjt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 400bb0ce-5261-4589-8ed1-8f46f602ca28,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e9eb6a0e43d05ce9ab12402ae8ed6aea429eb7b635c124166d3d51fcf3ea21,PodSandboxId:5975bdb11f892d7b55773ed6181630e7a222575c7e650bda090479e2033d9c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737576229026408072,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m4f7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89feff56-65d3-453c-aec2-2b913700601f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96ec4eb1e5cdb491de90ce68342ac23de8042699b5238f65f4583e9dc6e6e,PodSandboxId:179aac9d5c9fbd3a563d8a6fdaf5b99b0657efb5542b78e9b1422aa7d1d845a6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737576212626034849,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e91eb0-f8cb-4691-8180-3854ae9f05e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1710a94d8f75242581c7927de0b8cfed2729fbcb5c62ee6b435951fc530e9e,PodSandboxId:cf17b8ff4efc1759f28a6c0318b65829c688107e12c2e328056695da55d64c14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737576200765037430,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0dc1fc-a288-4472-9efd-a495438aaf68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f074dd12b01f6c2d7be32fc9b4cc15a7e09633ef41250303f9bb4b741fde0569,PodSandboxId:1a9e41785bcd3553e558a821f95ebcf358458c51cfecb73a3455570dc9d8a60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737576197630233886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l82r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812962f6-19c9-455f-b6ac-95c739ebbc05,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac16747769dfdc0bb27e445ca792c23eae693dd9bff634d25813
8a0f08ee948,PodSandboxId:ea4e05eae14acc1ef3a8e29b8f49bfa235f00f576a8b8423fa1d0d45c6bd54df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737576194550938852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z5sqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6e83878-3ae5-4c34-be45-cd9133d33398,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b625d59f0a3dc6f7a1ba016eda0fe92b5e10d07784039d3318472292de2da,PodSandboxId:dc57dac1
80ba7489f002f85ef995d237cfd5083ddb1156e7baee588e5c84c9c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737576180515292863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752c1168244c4a6b364d806da2f9e17e,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cc96ca2ccb481a5cebf5b89d989cb39c43caf6970294fdd1f71ae142c3c366,PodSandboxId:50ef6552fe7e0dd05a0f7ade9
5b73c0ab24d9bbbd6eef9a06418c62cfcc16b91,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737576180538286820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f580f6c3618cbf92b0d9c85c8224c689,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5773766167b6364ccaf40cf36c210fbddf6d30e38761dd913268a6f7fb1fc4,PodSandboxId:83635356ad855dabb6e00b34c0a5d77ec683c846981e0ab661bef9773609b5e0,Metadat
a:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737576180523878120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659d9439c2bea1b6cc18aef8f08e67d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9341c92c736b093a1a45a4cd9bdad6e54063edd8de69cef5367eec277041998,PodSandboxId:79dd2d723d06aa99965bc4f4530e9db619fdc49d14f2792ca0eb19dccb0790d
d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737576180511966112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cb68ad7fccfeed3db9f82c4dbb357,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69ab4719-6dba-4d45-b21d-e3ec61558fbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	378ad966fe8c6       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   503f6bf300253       nginx
	e58d33575b514       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   3414637a9fd67       busybox
	c3cfdbe0d4397       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   cfddd391bb5e2       ingress-nginx-controller-56d7c84fd4-qvz4c
	b21719f3fe46b       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   3a2db65d5ea8f       ingress-nginx-admission-patch-f9tx6
	57c5e8caeb4bc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   674d49c2819c6       ingress-nginx-admission-create-vljjt
	32e9eb6a0e43d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   5975bdb11f892       amd-gpu-device-plugin-m4f7k
	a0d96ec4eb1e5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   179aac9d5c9fb       kube-ingress-dns-minikube
	db1710a94d8f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   cf17b8ff4efc1       storage-provisioner
	f074dd12b01f6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   1a9e41785bcd3       coredns-668d6bf9bc-l82r5
	7ac16747769df       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             5 minutes ago       Running             kube-proxy                0                   ea4e05eae14ac       kube-proxy-z5sqk
	23cc96ca2ccb4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   50ef6552fe7e0       etcd-addons-772234
	7b5773766167b       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             5 minutes ago       Running             kube-controller-manager   0                   83635356ad855       kube-controller-manager-addons-772234
	d27b625d59f0a       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             5 minutes ago       Running             kube-scheduler            0                   dc57dac180ba7       kube-scheduler-addons-772234
	c9341c92c736b       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             5 minutes ago       Running             kube-apiserver            0                   79dd2d723d06a       kube-apiserver-addons-772234
	
	
	==> coredns [f074dd12b01f6c2d7be32fc9b4cc15a7e09633ef41250303f9bb4b741fde0569] <==
	[INFO] 10.244.0.8:40781 - 42965 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000223904s
	[INFO] 10.244.0.8:40781 - 13804 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000095204s
	[INFO] 10.244.0.8:40781 - 21323 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000189902s
	[INFO] 10.244.0.8:40781 - 14484 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00013221s
	[INFO] 10.244.0.8:40781 - 52426 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000077488s
	[INFO] 10.244.0.8:40781 - 26753 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127824s
	[INFO] 10.244.0.8:40781 - 58846 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000111667s
	[INFO] 10.244.0.8:36151 - 38596 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000195588s
	[INFO] 10.244.0.8:36151 - 38307 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000472943s
	[INFO] 10.244.0.8:45441 - 46554 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115733s
	[INFO] 10.244.0.8:45441 - 46311 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125528s
	[INFO] 10.244.0.8:55874 - 58529 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074229s
	[INFO] 10.244.0.8:55874 - 58296 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000259789s
	[INFO] 10.244.0.8:58343 - 11326 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079568s
	[INFO] 10.244.0.8:58343 - 11155 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101818s
	[INFO] 10.244.0.23:46744 - 60190 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000632529s
	[INFO] 10.244.0.23:33274 - 35992 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159829s
	[INFO] 10.244.0.23:35169 - 52852 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000187984s
	[INFO] 10.244.0.23:43785 - 40838 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001882s
	[INFO] 10.244.0.23:55805 - 52086 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167083s
	[INFO] 10.244.0.23:48508 - 39133 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080015s
	[INFO] 10.244.0.23:37576 - 42377 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004481893s
	[INFO] 10.244.0.23:59377 - 35638 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.004905162s
	[INFO] 10.244.0.27:54179 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000380693s
	[INFO] 10.244.0.27:43010 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000182582s
	
	
	==> describe nodes <==
	Name:               addons-772234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-772234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4
	                    minikube.k8s.io/name=addons-772234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_22T20_03_07_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-772234
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 Jan 2025 20:03:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-772234
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 Jan 2025 20:08:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 Jan 2025 20:07:11 +0000   Wed, 22 Jan 2025 20:03:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 Jan 2025 20:07:11 +0000   Wed, 22 Jan 2025 20:03:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 Jan 2025 20:07:11 +0000   Wed, 22 Jan 2025 20:03:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 Jan 2025 20:07:11 +0000   Wed, 22 Jan 2025 20:03:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-772234
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d49f8ff7eec46879944b636848d5d4b
	  System UUID:                2d49f8ff-7eec-4687-9944-b636848d5d4b
	  Boot ID:                    c273de28-a8cf-4b92-b807-6e1a7a726735
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-7d9564db4-4s5cm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-qvz4c    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m31s
	  kube-system                 amd-gpu-device-plugin-m4f7k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 coredns-668d6bf9bc-l82r5                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m41s
	  kube-system                 etcd-addons-772234                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m47s
	  kube-system                 kube-apiserver-addons-772234                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-addons-772234        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-z5sqk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-addons-772234                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m37s  kube-proxy       
	  Normal  Starting                 5m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s  kubelet          Node addons-772234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s  kubelet          Node addons-772234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s  kubelet          Node addons-772234 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m46s  kubelet          Node addons-772234 status is now: NodeReady
	  Normal  RegisteredNode           5m43s  node-controller  Node addons-772234 event: Registered Node addons-772234 in Controller
	
	
	==> dmesg <==
	[  +0.083701] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.218934] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.658846] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +4.474730] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.095263] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.018626] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.977509] kauditd_printk_skb: 59 callbacks suppressed
	[Jan22 20:04] kauditd_printk_skb: 4 callbacks suppressed
	[ +23.850136] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.474919] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.171516] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.313955] kauditd_printk_skb: 13 callbacks suppressed
	[  +9.961981] kauditd_printk_skb: 12 callbacks suppressed
	[Jan22 20:05] kauditd_printk_skb: 9 callbacks suppressed
	[Jan22 20:06] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.410386] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.027228] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.505315] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.039728] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.143143] kauditd_printk_skb: 51 callbacks suppressed
	[ +10.107261] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.480914] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.581375] kauditd_printk_skb: 7 callbacks suppressed
	[Jan22 20:07] kauditd_printk_skb: 22 callbacks suppressed
	[Jan22 20:08] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [23cc96ca2ccb481a5cebf5b89d989cb39c43caf6970294fdd1f71ae142c3c366] <==
	{"level":"warn","ts":"2025-01-22T20:06:14.969853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.074557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-01-22T20:06:14.969882Z","caller":"traceutil/trace.go:171","msg":"trace[1882360745] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1398; }","duration":"148.172286ms","start":"2025-01-22T20:06:14.821700Z","end":"2025-01-22T20:06:14.969872Z","steps":["trace[1882360745] 'agreement among raft nodes before linearized reading'  (duration: 147.978841ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T20:06:15.420809Z","caller":"traceutil/trace.go:171","msg":"trace[1949924900] linearizableReadLoop","detail":"{readStateIndex:1464; appliedIndex:1463; }","duration":"223.064053ms","start":"2025-01-22T20:06:15.197691Z","end":"2025-01-22T20:06:15.420755Z","steps":["trace[1949924900] 'read index received'  (duration: 218.705675ms)","trace[1949924900] 'applied index is now lower than readState.Index'  (duration: 4.357862ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-22T20:06:15.421037Z","caller":"traceutil/trace.go:171","msg":"trace[1960355980] transaction","detail":"{read_only:false; response_revision:1405; number_of_response:1; }","duration":"234.380548ms","start":"2025-01-22T20:06:15.186647Z","end":"2025-01-22T20:06:15.421027Z","steps":["trace[1960355980] 'process raft request'  (duration: 229.828869ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:15.421314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.603829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T20:06:15.421341Z","caller":"traceutil/trace.go:171","msg":"trace[1250343245] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1405; }","duration":"223.647635ms","start":"2025-01-22T20:06:15.197686Z","end":"2025-01-22T20:06:15.421333Z","steps":["trace[1250343245] 'agreement among raft nodes before linearized reading'  (duration: 223.550722ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:15.421582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.687916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2025-01-22T20:06:15.421605Z","caller":"traceutil/trace.go:171","msg":"trace[1377381182] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1405; }","duration":"186.738852ms","start":"2025-01-22T20:06:15.234860Z","end":"2025-01-22T20:06:15.421599Z","steps":["trace[1377381182] 'agreement among raft nodes before linearized reading'  (duration: 186.573334ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:15.421820Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.202458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T20:06:15.421866Z","caller":"traceutil/trace.go:171","msg":"trace[1415106888] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1405; }","duration":"137.267519ms","start":"2025-01-22T20:06:15.284590Z","end":"2025-01-22T20:06:15.421858Z","steps":["trace[1415106888] 'agreement among raft nodes before linearized reading'  (duration: 137.213982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:15.421947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.458426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T20:06:15.421982Z","caller":"traceutil/trace.go:171","msg":"trace[1304201416] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1405; }","duration":"164.513416ms","start":"2025-01-22T20:06:15.257464Z","end":"2025-01-22T20:06:15.421977Z","steps":["trace[1304201416] 'agreement among raft nodes before linearized reading'  (duration: 164.469611ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T20:06:38.726971Z","caller":"traceutil/trace.go:171","msg":"trace[267724027] linearizableReadLoop","detail":"{readStateIndex:1718; appliedIndex:1717; }","duration":"260.310527ms","start":"2025-01-22T20:06:38.466640Z","end":"2025-01-22T20:06:38.726951Z","steps":["trace[267724027] 'read index received'  (duration: 260.178526ms)","trace[267724027] 'applied index is now lower than readState.Index'  (duration: 131.595µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-22T20:06:38.727244Z","caller":"traceutil/trace.go:171","msg":"trace[2068020219] transaction","detail":"{read_only:false; response_revision:1649; number_of_response:1; }","duration":"288.308589ms","start":"2025-01-22T20:06:38.438924Z","end":"2025-01-22T20:06:38.727233Z","steps":["trace[2068020219] 'process raft request'  (duration: 287.945256ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:38.727456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.7822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-22T20:06:38.727478Z","caller":"traceutil/trace.go:171","msg":"trace[1213559694] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1649; }","duration":"260.860652ms","start":"2025-01-22T20:06:38.466612Z","end":"2025-01-22T20:06:38.727472Z","steps":["trace[1213559694] 'agreement among raft nodes before linearized reading'  (duration: 260.783234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:38.727657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.923645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T20:06:38.727678Z","caller":"traceutil/trace.go:171","msg":"trace[1084547587] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1649; }","duration":"259.960441ms","start":"2025-01-22T20:06:38.467707Z","end":"2025-01-22T20:06:38.727668Z","steps":["trace[1084547587] 'agreement among raft nodes before linearized reading'  (duration: 259.930643ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T20:06:38.727886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.331271ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T20:06:38.727902Z","caller":"traceutil/trace.go:171","msg":"trace[1114925016] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1649; }","duration":"214.3494ms","start":"2025-01-22T20:06:38.513548Z","end":"2025-01-22T20:06:38.727898Z","steps":["trace[1114925016] 'agreement among raft nodes before linearized reading'  (duration: 214.325517ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T20:06:40.611912Z","caller":"traceutil/trace.go:171","msg":"trace[1078996855] linearizableReadLoop","detail":"{readStateIndex:1731; appliedIndex:1730; }","duration":"145.881138ms","start":"2025-01-22T20:06:40.466014Z","end":"2025-01-22T20:06:40.611895Z","steps":["trace[1078996855] 'read index received'  (duration: 114.375987ms)","trace[1078996855] 'applied index is now lower than readState.Index'  (duration: 31.504168ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T20:06:40.612048Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.156916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-22T20:06:40.612045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.010167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T20:06:40.612067Z","caller":"traceutil/trace.go:171","msg":"trace[147415083] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1661; }","duration":"112.218464ms","start":"2025-01-22T20:06:40.499844Z","end":"2025-01-22T20:06:40.612062Z","steps":["trace[147415083] 'agreement among raft nodes before linearized reading'  (duration: 112.167875ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T20:06:40.612087Z","caller":"traceutil/trace.go:171","msg":"trace[1720974570] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1661; }","duration":"146.088588ms","start":"2025-01-22T20:06:40.465986Z","end":"2025-01-22T20:06:40.612074Z","steps":["trace[1720974570] 'agreement among raft nodes before linearized reading'  (duration: 146.002209ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:08:53 up 6 min,  0 users,  load average: 0.81, 1.44, 0.79
	Linux addons-772234 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9341c92c736b093a1a45a4cd9bdad6e54063edd8de69cef5367eec277041998] <==
	 > logger="UnhandledError"
	I0122 20:04:13.376117       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0122 20:04:13.411930       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0122 20:05:59.388914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:49190: use of closed network connection
	E0122 20:05:59.607366       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:49206: use of closed network connection
	I0122 20:06:09.182820       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.177.80"}
	I0122 20:06:14.362304       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0122 20:06:28.172281       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0122 20:06:29.366830       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0122 20:06:29.402976       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0122 20:06:29.638742       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.1.86"}
	E0122 20:06:44.766836       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0122 20:06:47.186682       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0122 20:07:02.920618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0122 20:07:02.920754       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0122 20:07:02.947883       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0122 20:07:02.949474       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0122 20:07:03.022612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0122 20:07:03.022692       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0122 20:07:03.069746       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0122 20:07:03.069796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0122 20:07:04.023782       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0122 20:07:04.071349       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0122 20:07:04.151817       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0122 20:08:51.540931       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.95.124"}
	
	
	==> kube-controller-manager [7b5773766167b6364ccaf40cf36c210fbddf6d30e38761dd913268a6f7fb1fc4] <==
	W0122 20:07:47.767619       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0122 20:07:47.767675       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0122 20:07:47.772965       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0122 20:07:47.774059       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0122 20:07:47.775171       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0122 20:07:47.775253       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0122 20:08:17.282774       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0122 20:08:17.284311       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0122 20:08:17.285581       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0122 20:08:17.285653       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0122 20:08:18.602056       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0122 20:08:18.603769       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0122 20:08:18.605033       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0122 20:08:18.605135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0122 20:08:30.276173       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0122 20:08:30.277376       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0122 20:08:30.278701       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0122 20:08:30.278763       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0122 20:08:34.055241       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0122 20:08:34.056354       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0122 20:08:34.057438       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0122 20:08:34.057564       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0122 20:08:51.325634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="50.629421ms"
	I0122 20:08:51.341209       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="15.168987ms"
	I0122 20:08:51.341465       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="45.811µs"
	
	
	==> kube-proxy [7ac16747769dfdc0bb27e445ca792c23eae693dd9bff634d258138a0f08ee948] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0122 20:03:15.332859       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0122 20:03:15.382778       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0122 20:03:15.382869       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0122 20:03:15.797157       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0122 20:03:15.797213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 20:03:15.797239       1 server_linux.go:170] "Using iptables Proxier"
	I0122 20:03:15.972795       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0122 20:03:15.973156       1 server.go:497] "Version info" version="v1.32.1"
	I0122 20:03:15.973191       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 20:03:16.001215       1 config.go:199] "Starting service config controller"
	I0122 20:03:16.023925       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0122 20:03:16.013179       1 config.go:105] "Starting endpoint slice config controller"
	I0122 20:03:16.024043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0122 20:03:16.013974       1 config.go:329] "Starting node config controller"
	I0122 20:03:16.024175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0122 20:03:16.134361       1 shared_informer.go:320] Caches are synced for node config
	I0122 20:03:16.134395       1 shared_informer.go:320] Caches are synced for service config
	I0122 20:03:16.134405       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d27b625d59f0a3dc6f7a1ba016eda0fe92b5e10d07784039d3318472292de2da] <==
	W0122 20:03:04.246832       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0122 20:03:04.246939       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.286785       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 20:03:04.286906       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.550923       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0122 20:03:04.550981       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.556860       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 20:03:04.558971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.632570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0122 20:03:04.632627       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.659568       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 20:03:04.659706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.735316       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0122 20:03:04.735370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.799712       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 20:03:04.799780       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.840364       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0122 20:03:04.840418       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.875896       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 20:03:04.876020       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0122 20:03:04.916033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0122 20:03:04.916095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0122 20:03:04.924250       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 20:03:04.924305       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0122 20:03:07.575495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 22 20:08:06 addons-772234 kubelet[1231]: E0122 20:08:06.921710    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576486921185518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:16 addons-772234 kubelet[1231]: E0122 20:08:16.924596    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576496923905735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:16 addons-772234 kubelet[1231]: E0122 20:08:16.924649    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576496923905735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:22 addons-772234 kubelet[1231]: I0122 20:08:22.330174    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 22 20:08:26 addons-772234 kubelet[1231]: E0122 20:08:26.927800    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576506926911692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:26 addons-772234 kubelet[1231]: E0122 20:08:26.927864    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576506926911692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:36 addons-772234 kubelet[1231]: E0122 20:08:36.934216    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576516933260480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:36 addons-772234 kubelet[1231]: E0122 20:08:36.934736    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576516933260480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:45 addons-772234 kubelet[1231]: I0122 20:08:45.326192    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-m4f7k" secret="" err="secret \"gcp-auth\" not found"
	Jan 22 20:08:46 addons-772234 kubelet[1231]: E0122 20:08:46.937920    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576526937452607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:46 addons-772234 kubelet[1231]: E0122 20:08:46.937976    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737576526937452607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314274    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="2e888ff0-0853-4f35-85af-9483d8e8996b" containerName="volume-snapshot-controller"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314764    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbf8a63e-71be-4a91-953f-d82996dad359" containerName="liveness-probe"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314825    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbf8a63e-71be-4a91-953f-d82996dad359" containerName="csi-external-health-monitor-controller"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314861    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbf8a63e-71be-4a91-953f-d82996dad359" containerName="node-driver-registrar"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314894    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="5c3cc19d-11fd-4d38-a853-0bb58100a9d8" containerName="task-pv-container"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314929    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="5546f4b8-d18e-4914-8335-208c5695ecaa" containerName="csi-resizer"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.314972    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbf8a63e-71be-4a91-953f-d82996dad359" containerName="csi-snapshotter"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.315004    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbf8a63e-71be-4a91-953f-d82996dad359" containerName="hostpath"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.315034    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbf8a63e-71be-4a91-953f-d82996dad359" containerName="csi-provisioner"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.315090    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="93a42413-3ef1-49d2-a0df-62b0e9a319de" containerName="csi-attacher"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.315122    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="066c356a-79a1-4fa4-ba42-ce1a378408fb" containerName="local-path-provisioner"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.315165    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="ac3cf146-3878-4a7b-be98-c7589eb53409" containerName="volume-snapshot-controller"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.318892    1231 status_manager.go:890] "Failed to get status for pod" podUID="76b851cd-b91e-4995-b070-763b868a6c9d" pod="default/hello-world-app-7d9564db4-4s5cm" err="pods \"hello-world-app-7d9564db4-4s5cm\" is forbidden: User \"system:node:addons-772234\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-772234' and this object"
	Jan 22 20:08:51 addons-772234 kubelet[1231]: I0122 20:08:51.396600    1231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6vpr\" (UniqueName: \"kubernetes.io/projected/76b851cd-b91e-4995-b070-763b868a6c9d-kube-api-access-r6vpr\") pod \"hello-world-app-7d9564db4-4s5cm\" (UID: \"76b851cd-b91e-4995-b070-763b868a6c9d\") " pod="default/hello-world-app-7d9564db4-4s5cm"
	
	
	==> storage-provisioner [db1710a94d8f75242581c7927de0b8cfed2729fbcb5c62ee6b435951fc530e9e] <==
	I0122 20:03:21.963329       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0122 20:03:22.028069       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0122 20:03:22.028159       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0122 20:03:22.100696       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0122 20:03:22.100928       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-772234_29c6f05c-e7ac-4b11-8ffe-9889dcf34faa!
	I0122 20:03:22.101887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c26f8d74-515f-4312-b217-9f6f2ebcd193", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-772234_29c6f05c-e7ac-4b11-8ffe-9889dcf34faa became leader
	I0122 20:03:22.201912       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-772234_29c6f05c-e7ac-4b11-8ffe-9889dcf34faa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-772234 -n addons-772234
helpers_test.go:261: (dbg) Run:  kubectl --context addons-772234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-4s5cm ingress-nginx-admission-create-vljjt ingress-nginx-admission-patch-f9tx6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-772234 describe pod hello-world-app-7d9564db4-4s5cm ingress-nginx-admission-create-vljjt ingress-nginx-admission-patch-f9tx6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-772234 describe pod hello-world-app-7d9564db4-4s5cm ingress-nginx-admission-create-vljjt ingress-nginx-admission-patch-f9tx6: exit status 1 (76.361984ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-4s5cm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-772234/
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Image:        docker.io/kicbase/echo-server:1.0
	    Port:         8080/TCP
	    Host Port:    0/TCP
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6vpr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-r6vpr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-4s5cm to addons-772234
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 2.102s (2.102s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container: hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vljjt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f9tx6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-772234 describe pod hello-world-app-7d9564db4-4s5cm ingress-nginx-admission-create-vljjt ingress-nginx-admission-patch-f9tx6: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable ingress-dns --alsologtostderr -v=1: (1.417869368s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable ingress --alsologtostderr -v=1: (7.88039747s)
--- FAIL: TestAddons/parallel/Ingress (154.60s)

                                                
                                    
TestPreload (293s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-074508 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0122 21:00:33.340067  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:00:50.258041  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-074508 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.47454174s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-074508 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-074508 image pull gcr.io/k8s-minikube/busybox: (2.477807971s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-074508
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-074508: (1m31.012815233s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-074508 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0122 21:04:04.377146  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-074508 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.790973049s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-074508 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-22 21:04:15.84499525 +0000 UTC m=+3731.242819302
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-074508 -n test-preload-074508
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-074508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-074508 logs -n 25: (1.25701241s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-330484 ssh -n                                                                 | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:45 UTC | 22 Jan 25 20:45 UTC |
	|         | multinode-330484-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-330484 ssh -n multinode-330484 sudo cat                                       | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:45 UTC | 22 Jan 25 20:45 UTC |
	|         | /home/docker/cp-test_multinode-330484-m03_multinode-330484.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-330484 cp multinode-330484-m03:/home/docker/cp-test.txt                       | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:45 UTC | 22 Jan 25 20:45 UTC |
	|         | multinode-330484-m02:/home/docker/cp-test_multinode-330484-m03_multinode-330484-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-330484 ssh -n                                                                 | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:45 UTC | 22 Jan 25 20:45 UTC |
	|         | multinode-330484-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-330484 ssh -n multinode-330484-m02 sudo cat                                   | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:45 UTC | 22 Jan 25 20:45 UTC |
	|         | /home/docker/cp-test_multinode-330484-m03_multinode-330484-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-330484 node stop m03                                                          | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:45 UTC | 22 Jan 25 20:45 UTC |
	| node    | multinode-330484 node start                                                             | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:46 UTC | 22 Jan 25 20:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-330484                                                                | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:46 UTC |                     |
	| stop    | -p multinode-330484                                                                     | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:46 UTC | 22 Jan 25 20:49 UTC |
	| start   | -p multinode-330484                                                                     | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:49 UTC | 22 Jan 25 20:52 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-330484                                                                | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:52 UTC |                     |
	| node    | multinode-330484 node delete                                                            | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:52 UTC | 22 Jan 25 20:52 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-330484 stop                                                                   | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:52 UTC | 22 Jan 25 20:55 UTC |
	| start   | -p multinode-330484                                                                     | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:55 UTC | 22 Jan 25 20:58 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-330484                                                                | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:58 UTC |                     |
	| start   | -p multinode-330484-m02                                                                 | multinode-330484-m02 | jenkins | v1.35.0 | 22 Jan 25 20:58 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-330484-m03                                                                 | multinode-330484-m03 | jenkins | v1.35.0 | 22 Jan 25 20:58 UTC | 22 Jan 25 20:59 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-330484                                                                 | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:59 UTC |                     |
	| delete  | -p multinode-330484-m03                                                                 | multinode-330484-m03 | jenkins | v1.35.0 | 22 Jan 25 20:59 UTC | 22 Jan 25 20:59 UTC |
	| delete  | -p multinode-330484                                                                     | multinode-330484     | jenkins | v1.35.0 | 22 Jan 25 20:59 UTC | 22 Jan 25 20:59 UTC |
	| start   | -p test-preload-074508                                                                  | test-preload-074508  | jenkins | v1.35.0 | 22 Jan 25 20:59 UTC | 22 Jan 25 21:01 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-074508 image pull                                                          | test-preload-074508  | jenkins | v1.35.0 | 22 Jan 25 21:01 UTC | 22 Jan 25 21:01 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-074508                                                                  | test-preload-074508  | jenkins | v1.35.0 | 22 Jan 25 21:01 UTC | 22 Jan 25 21:03 UTC |
	| start   | -p test-preload-074508                                                                  | test-preload-074508  | jenkins | v1.35.0 | 22 Jan 25 21:03 UTC | 22 Jan 25 21:04 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-074508 image list                                                          | test-preload-074508  | jenkins | v1.35.0 | 22 Jan 25 21:04 UTC | 22 Jan 25 21:04 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:03:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:03:13.863875  287008 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:03:13.864034  287008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:03:13.864046  287008 out.go:358] Setting ErrFile to fd 2...
	I0122 21:03:13.864053  287008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:03:13.864317  287008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:03:13.865001  287008 out.go:352] Setting JSON to false
	I0122 21:03:13.866032  287008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":13540,"bootTime":1737566254,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:03:13.866139  287008 start.go:139] virtualization: kvm guest
	I0122 21:03:13.868644  287008 out.go:177] * [test-preload-074508] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:03:13.870692  287008 notify.go:220] Checking for updates...
	I0122 21:03:13.870755  287008 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:03:13.872310  287008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:03:13.873799  287008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:03:13.875179  287008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:03:13.876502  287008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:03:13.877944  287008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:03:13.879930  287008 config.go:182] Loaded profile config "test-preload-074508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0122 21:03:13.880383  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:03:13.880436  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:13.896731  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0122 21:03:13.897291  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:13.897940  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:03:13.897969  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:13.898408  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:13.898691  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:13.900801  287008 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0122 21:03:13.902149  287008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:03:13.902676  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:03:13.902745  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:13.918717  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
	I0122 21:03:13.919149  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:13.919646  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:03:13.919682  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:13.920014  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:13.920242  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:13.959382  287008 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:03:13.960683  287008 start.go:297] selected driver: kvm2
	I0122 21:03:13.960702  287008 start.go:901] validating driver "kvm2" against &{Name:test-preload-074508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-074508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:13.960837  287008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:03:13.961577  287008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:13.961672  287008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:03:13.977805  287008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:03:13.978273  287008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:03:13.978316  287008 cni.go:84] Creating CNI manager for ""
	I0122 21:03:13.978369  287008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:03:13.978443  287008 start.go:340] cluster config:
	{Name:test-preload-074508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-074508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:13.978552  287008 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:13.981163  287008 out.go:177] * Starting "test-preload-074508" primary control-plane node in "test-preload-074508" cluster
	I0122 21:03:13.982545  287008 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0122 21:03:14.011495  287008 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0122 21:03:14.011535  287008 cache.go:56] Caching tarball of preloaded images
	I0122 21:03:14.011743  287008 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0122 21:03:14.013615  287008 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0122 21:03:14.014795  287008 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0122 21:03:14.040618  287008 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0122 21:03:17.677515  287008 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0122 21:03:17.677620  287008 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0122 21:03:18.560916  287008 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0122 21:03:18.561075  287008 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/config.json ...
	I0122 21:03:18.561321  287008 start.go:360] acquireMachinesLock for test-preload-074508: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:03:18.561386  287008 start.go:364] duration metric: took 40.7µs to acquireMachinesLock for "test-preload-074508"
	I0122 21:03:18.561401  287008 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:03:18.561426  287008 fix.go:54] fixHost starting: 
	I0122 21:03:18.561716  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:03:18.561768  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:18.577747  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0122 21:03:18.578285  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:18.578822  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:03:18.578850  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:18.579221  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:18.579458  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:18.579619  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetState
	I0122 21:03:18.581491  287008 fix.go:112] recreateIfNeeded on test-preload-074508: state=Stopped err=<nil>
	I0122 21:03:18.581529  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	W0122 21:03:18.581710  287008 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:03:18.584005  287008 out.go:177] * Restarting existing kvm2 VM for "test-preload-074508" ...
	I0122 21:03:18.585250  287008 main.go:141] libmachine: (test-preload-074508) Calling .Start
	I0122 21:03:18.585573  287008 main.go:141] libmachine: (test-preload-074508) starting domain...
	I0122 21:03:18.585605  287008 main.go:141] libmachine: (test-preload-074508) ensuring networks are active...
	I0122 21:03:18.586545  287008 main.go:141] libmachine: (test-preload-074508) Ensuring network default is active
	I0122 21:03:18.586869  287008 main.go:141] libmachine: (test-preload-074508) Ensuring network mk-test-preload-074508 is active
	I0122 21:03:18.587259  287008 main.go:141] libmachine: (test-preload-074508) getting domain XML...
	I0122 21:03:18.587999  287008 main.go:141] libmachine: (test-preload-074508) creating domain...
	I0122 21:03:19.863766  287008 main.go:141] libmachine: (test-preload-074508) waiting for IP...
	I0122 21:03:19.864847  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:19.865307  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:19.865421  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:19.865284  287059 retry.go:31] will retry after 244.493061ms: waiting for domain to come up
	I0122 21:03:20.111945  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:20.112452  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:20.112479  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:20.112386  287059 retry.go:31] will retry after 290.682634ms: waiting for domain to come up
	I0122 21:03:20.405054  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:20.405520  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:20.405553  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:20.405465  287059 retry.go:31] will retry after 416.929375ms: waiting for domain to come up
	I0122 21:03:20.824320  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:20.824773  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:20.824809  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:20.824724  287059 retry.go:31] will retry after 455.95903ms: waiting for domain to come up
	I0122 21:03:21.282607  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:21.283087  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:21.283126  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:21.283042  287059 retry.go:31] will retry after 753.662866ms: waiting for domain to come up
	I0122 21:03:22.038117  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:22.038648  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:22.038690  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:22.038645  287059 retry.go:31] will retry after 800.053976ms: waiting for domain to come up
	I0122 21:03:22.840673  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:22.841200  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:22.841233  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:22.841137  287059 retry.go:31] will retry after 1.130825002s: waiting for domain to come up
	I0122 21:03:23.973356  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:23.973911  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:23.973978  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:23.973804  287059 retry.go:31] will retry after 1.459295847s: waiting for domain to come up
	I0122 21:03:25.435711  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:25.436284  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:25.436322  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:25.436238  287059 retry.go:31] will retry after 1.441064954s: waiting for domain to come up
	I0122 21:03:26.879999  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:26.880484  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:26.880546  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:26.880438  287059 retry.go:31] will retry after 2.095245855s: waiting for domain to come up
	I0122 21:03:28.977294  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:28.977844  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:28.977895  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:28.977820  287059 retry.go:31] will retry after 1.813010986s: waiting for domain to come up
	I0122 21:03:30.792881  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:30.793314  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:30.793345  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:30.793274  287059 retry.go:31] will retry after 2.388129171s: waiting for domain to come up
	I0122 21:03:33.185148  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:33.185616  287008 main.go:141] libmachine: (test-preload-074508) DBG | unable to find current IP address of domain test-preload-074508 in network mk-test-preload-074508
	I0122 21:03:33.185642  287008 main.go:141] libmachine: (test-preload-074508) DBG | I0122 21:03:33.185587  287059 retry.go:31] will retry after 3.277460541s: waiting for domain to come up
	I0122 21:03:36.464484  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.464996  287008 main.go:141] libmachine: (test-preload-074508) found domain IP: 192.168.39.34
	I0122 21:03:36.465021  287008 main.go:141] libmachine: (test-preload-074508) reserving static IP address...
	I0122 21:03:36.465040  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has current primary IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.465502  287008 main.go:141] libmachine: (test-preload-074508) reserved static IP address 192.168.39.34 for domain test-preload-074508
	I0122 21:03:36.465545  287008 main.go:141] libmachine: (test-preload-074508) waiting for SSH...
	I0122 21:03:36.465566  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "test-preload-074508", mac: "52:54:00:4f:7d:28", ip: "192.168.39.34"} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.465591  287008 main.go:141] libmachine: (test-preload-074508) DBG | skip adding static IP to network mk-test-preload-074508 - found existing host DHCP lease matching {name: "test-preload-074508", mac: "52:54:00:4f:7d:28", ip: "192.168.39.34"}
	I0122 21:03:36.465608  287008 main.go:141] libmachine: (test-preload-074508) DBG | Getting to WaitForSSH function...
	I0122 21:03:36.468002  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.468332  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.468377  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.468492  287008 main.go:141] libmachine: (test-preload-074508) DBG | Using SSH client type: external
	I0122 21:03:36.468528  287008 main.go:141] libmachine: (test-preload-074508) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa (-rw-------)
	I0122 21:03:36.468586  287008 main.go:141] libmachine: (test-preload-074508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:03:36.468609  287008 main.go:141] libmachine: (test-preload-074508) DBG | About to run SSH command:
	I0122 21:03:36.468622  287008 main.go:141] libmachine: (test-preload-074508) DBG | exit 0
	I0122 21:03:36.598999  287008 main.go:141] libmachine: (test-preload-074508) DBG | SSH cmd err, output: <nil>: 
	I0122 21:03:36.599415  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetConfigRaw
	I0122 21:03:36.600079  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetIP
	I0122 21:03:36.602997  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.603312  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.603343  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.603652  287008 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/config.json ...
	I0122 21:03:36.603928  287008 machine.go:93] provisionDockerMachine start ...
	I0122 21:03:36.603953  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:36.604213  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:36.606766  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.607124  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.607153  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.607345  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:36.607565  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:36.607700  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:36.607851  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:36.607988  287008 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:36.608260  287008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0122 21:03:36.608273  287008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:03:36.723375  287008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:03:36.723406  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetMachineName
	I0122 21:03:36.723710  287008 buildroot.go:166] provisioning hostname "test-preload-074508"
	I0122 21:03:36.723741  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetMachineName
	I0122 21:03:36.723988  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:36.726688  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.727063  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.727087  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.727260  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:36.727462  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:36.727603  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:36.727729  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:36.727873  287008 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:36.728066  287008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0122 21:03:36.728079  287008 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-074508 && echo "test-preload-074508" | sudo tee /etc/hostname
	I0122 21:03:36.861247  287008 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-074508
	
	I0122 21:03:36.861287  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:36.864051  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.864369  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.864396  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.864673  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:36.864894  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:36.865075  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:36.865204  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:36.865336  287008 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:36.865519  287008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0122 21:03:36.865542  287008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-074508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-074508/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-074508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:03:36.989249  287008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:03:36.989283  287008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:03:36.989304  287008 buildroot.go:174] setting up certificates
	I0122 21:03:36.989318  287008 provision.go:84] configureAuth start
	I0122 21:03:36.989329  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetMachineName
	I0122 21:03:36.989648  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetIP
	I0122 21:03:36.992365  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.992693  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.992734  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.992948  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:36.995497  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.995840  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:36.995880  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:36.996012  287008 provision.go:143] copyHostCerts
	I0122 21:03:36.996072  287008 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:03:36.996093  287008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:03:36.996161  287008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:03:36.996274  287008 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:03:36.996284  287008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:03:36.996309  287008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:03:36.996364  287008 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:03:36.996371  287008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:03:36.996394  287008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:03:36.996449  287008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.test-preload-074508 san=[127.0.0.1 192.168.39.34 localhost minikube test-preload-074508]
	I0122 21:03:37.085196  287008 provision.go:177] copyRemoteCerts
	I0122 21:03:37.085273  287008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:03:37.085301  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:37.088017  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.088350  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.088395  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.088634  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:37.088815  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.088935  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:37.089038  287008 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa Username:docker}
	I0122 21:03:37.177795  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 21:03:37.205891  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:03:37.233350  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0122 21:03:37.260787  287008 provision.go:87] duration metric: took 271.453037ms to configureAuth
	I0122 21:03:37.260820  287008 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:03:37.261019  287008 config.go:182] Loaded profile config "test-preload-074508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0122 21:03:37.261111  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:37.263884  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.264214  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.264253  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.264471  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:37.264671  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.264844  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.264988  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:37.265221  287008 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:37.265415  287008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0122 21:03:37.265435  287008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:03:37.511311  287008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:03:37.511346  287008 machine.go:96] duration metric: took 907.40138ms to provisionDockerMachine
	I0122 21:03:37.511361  287008 start.go:293] postStartSetup for "test-preload-074508" (driver="kvm2")
	I0122 21:03:37.511374  287008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:03:37.511404  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:37.511745  287008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:03:37.511781  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:37.514442  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.514806  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.514838  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.514994  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:37.515235  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.515431  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:37.515592  287008 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa Username:docker}
	I0122 21:03:37.605530  287008 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:03:37.610545  287008 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:03:37.610580  287008 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:03:37.610672  287008 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:03:37.610775  287008 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:03:37.610921  287008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:03:37.621604  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:03:37.650241  287008 start.go:296] duration metric: took 138.859443ms for postStartSetup
	I0122 21:03:37.650298  287008 fix.go:56] duration metric: took 19.088872565s for fixHost
	I0122 21:03:37.650324  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:37.653111  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.653497  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.653521  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.653772  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:37.653994  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.654175  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.654347  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:37.654533  287008 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:37.654715  287008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I0122 21:03:37.654726  287008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:03:37.771487  287008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737579817.739734050
	
	I0122 21:03:37.771542  287008 fix.go:216] guest clock: 1737579817.739734050
	I0122 21:03:37.771555  287008 fix.go:229] Guest: 2025-01-22 21:03:37.73973405 +0000 UTC Remote: 2025-01-22 21:03:37.65030302 +0000 UTC m=+23.830782404 (delta=89.43103ms)
	I0122 21:03:37.771617  287008 fix.go:200] guest clock delta is within tolerance: 89.43103ms
	I0122 21:03:37.771625  287008 start.go:83] releasing machines lock for "test-preload-074508", held for 19.210228013s
	I0122 21:03:37.771656  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:37.771950  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetIP
	I0122 21:03:37.774649  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.775001  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.775026  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.775219  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:37.775788  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:37.775976  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:03:37.776079  287008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:03:37.776133  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:37.776195  287008 ssh_runner.go:195] Run: cat /version.json
	I0122 21:03:37.776224  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:03:37.778918  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.779207  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.779264  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.779288  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.779514  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:37.779594  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:37.779636  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:37.779737  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.779794  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:03:37.779964  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:03:37.779968  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:37.780146  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:03:37.780149  287008 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa Username:docker}
	I0122 21:03:37.780271  287008 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa Username:docker}
	I0122 21:03:37.885022  287008 ssh_runner.go:195] Run: systemctl --version
	I0122 21:03:37.892255  287008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:03:38.047183  287008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:03:38.053790  287008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:03:38.053868  287008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:03:38.073692  287008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:03:38.073723  287008 start.go:495] detecting cgroup driver to use...
	I0122 21:03:38.073791  287008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:03:38.092173  287008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:03:38.108910  287008 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:03:38.108982  287008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:03:38.126383  287008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:03:38.142571  287008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:03:38.269553  287008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:03:38.450118  287008 docker.go:233] disabling docker service ...
	I0122 21:03:38.450249  287008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:03:38.466697  287008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:03:38.482000  287008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:03:38.603694  287008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:03:38.747099  287008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:03:38.762755  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:03:38.784330  287008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0122 21:03:38.784408  287008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.796077  287008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:03:38.796161  287008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.807976  287008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.820022  287008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.832074  287008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:03:38.844491  287008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.856932  287008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.877475  287008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:03:38.889379  287008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:03:38.899984  287008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:03:38.900052  287008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:03:38.914705  287008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:03:38.926719  287008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:39.066275  287008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:03:39.173721  287008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:03:39.173796  287008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:03:39.180770  287008 start.go:563] Will wait 60s for crictl version
	I0122 21:03:39.180840  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:39.185459  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:03:39.226981  287008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:03:39.227067  287008 ssh_runner.go:195] Run: crio --version
	I0122 21:03:39.259480  287008 ssh_runner.go:195] Run: crio --version
	I0122 21:03:39.292366  287008 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0122 21:03:39.293632  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetIP
	I0122 21:03:39.296563  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:39.296915  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:03:39.296937  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:03:39.297217  287008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0122 21:03:39.301825  287008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:03:39.317069  287008 kubeadm.go:883] updating cluster {Name:test-preload-074508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-074508 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:03:39.317301  287008 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0122 21:03:39.317412  287008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:03:39.363680  287008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0122 21:03:39.363760  287008 ssh_runner.go:195] Run: which lz4
	I0122 21:03:39.368574  287008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:03:39.373488  287008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:03:39.373530  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0122 21:03:41.187525  287008 crio.go:462] duration metric: took 1.818991645s to copy over tarball
	I0122 21:03:41.187608  287008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:03:43.855221  287008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.667585218s)
	I0122 21:03:43.855253  287008 crio.go:469] duration metric: took 2.667692939s to extract the tarball
	I0122 21:03:43.855260  287008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:03:43.898924  287008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:03:43.953479  287008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0122 21:03:43.953510  287008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0122 21:03:43.953579  287008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:03:43.953607  287008 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:43.953635  287008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:43.953607  287008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:43.953654  287008 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:43.953674  287008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:43.953685  287008 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0122 21:03:43.953690  287008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:43.955257  287008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:43.955280  287008 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:43.955300  287008 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0122 21:03:43.955283  287008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:03:43.955258  287008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:43.955280  287008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:43.955257  287008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:43.955706  287008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:44.098836  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:44.100979  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:44.108952  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0122 21:03:44.112630  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:44.132484  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:44.135787  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:44.178573  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:44.206072  287008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0122 21:03:44.206153  287008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:44.206232  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.220761  287008 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0122 21:03:44.220826  287008 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:44.220900  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.287470  287008 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0122 21:03:44.287527  287008 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0122 21:03:44.287572  287008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0122 21:03:44.287619  287008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:44.287584  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.287689  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.292773  287008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0122 21:03:44.292833  287008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:44.292910  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.310091  287008 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0122 21:03:44.310151  287008 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:44.310225  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.334289  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:44.334312  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:44.334367  287008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0122 21:03:44.334403  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0122 21:03:44.334433  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:44.334408  287008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:44.334486  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:44.334509  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:44.334521  287008 ssh_runner.go:195] Run: which crictl
	I0122 21:03:44.341640  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:44.505192  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:44.522559  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0122 21:03:44.522559  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:44.522641  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:44.522710  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:44.522796  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:44.539390  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:44.585828  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0122 21:03:44.656019  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0122 21:03:44.720253  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0122 21:03:44.731333  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0122 21:03:44.731420  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 21:03:44.731455  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0122 21:03:44.748503  287008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0122 21:03:44.748544  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0122 21:03:44.748642  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0122 21:03:44.793720  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0122 21:03:44.793869  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0122 21:03:44.866423  287008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:03:44.894597  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0122 21:03:44.894656  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0122 21:03:44.894704  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0122 21:03:44.894750  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0122 21:03:44.894769  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0122 21:03:44.894773  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0122 21:03:44.894786  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0122 21:03:44.894854  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0122 21:03:44.894879  287008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0122 21:03:44.894914  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0122 21:03:44.894922  287008 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0122 21:03:44.894955  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0122 21:03:44.894965  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0122 21:03:44.894978  287008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0122 21:03:45.061667  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0122 21:03:45.061707  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0122 21:03:45.061772  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0122 21:03:45.061828  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0122 21:03:46.889130  287008 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (1.994142401s)
	I0122 21:03:46.889190  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0122 21:03:46.889219  287008 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0122 21:03:46.889227  287008 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (1.994219467s)
	I0122 21:03:46.889269  287008 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0122 21:03:46.889277  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0122 21:03:47.045373  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0122 21:03:47.045426  287008 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0122 21:03:47.045486  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0122 21:03:49.198523  287008 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.15300471s)
	I0122 21:03:49.198590  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0122 21:03:49.198636  287008 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0122 21:03:49.198714  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0122 21:03:49.657614  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0122 21:03:49.657673  287008 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0122 21:03:49.657729  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0122 21:03:50.518117  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0122 21:03:50.518173  287008 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0122 21:03:50.518260  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0122 21:03:51.271019  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0122 21:03:51.271069  287008 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0122 21:03:51.271118  287008 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0122 21:03:52.027612  287008 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0122 21:03:52.027673  287008 cache_images.go:123] Successfully loaded all cached images
	I0122 21:03:52.027681  287008 cache_images.go:92] duration metric: took 8.074157459s to LoadCachedImages
	I0122 21:03:52.027700  287008 kubeadm.go:934] updating node { 192.168.39.34 8443 v1.24.4 crio true true} ...
	I0122 21:03:52.027886  287008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-074508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-074508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:03:52.027984  287008 ssh_runner.go:195] Run: crio config
	I0122 21:03:52.086425  287008 cni.go:84] Creating CNI manager for ""
	I0122 21:03:52.086453  287008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:03:52.086464  287008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:03:52.086491  287008 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-074508 NodeName:test-preload-074508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:03:52.086684  287008 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-074508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:03:52.086775  287008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0122 21:03:52.098792  287008 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:03:52.098892  287008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:03:52.110480  287008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0122 21:03:52.129879  287008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:03:52.149724  287008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0122 21:03:52.170151  287008 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I0122 21:03:52.174904  287008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
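
The bash one-liner above makes the /etc/hosts update idempotent: any existing control-plane.minikube.internal line is dropped before the current mapping is appended. A rough Go equivalent, simplified to write the file directly instead of staging through /tmp and sudo cp as the logged command does:

// Sketch of the idempotent /etc/hosts edit shown above: remove any stale
// control-plane.minikube.internal entry, then append the current mapping.
// Simplified relative to the logged command (no temp file, no sudo cp).
package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.34\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
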
	I0122 21:03:52.190226  287008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:52.328572  287008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:03:52.349094  287008 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508 for IP: 192.168.39.34
	I0122 21:03:52.349122  287008 certs.go:194] generating shared ca certs ...
	I0122 21:03:52.349143  287008 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:52.349311  287008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:03:52.349364  287008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:03:52.349375  287008 certs.go:256] generating profile certs ...
	I0122 21:03:52.349472  287008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/client.key
	I0122 21:03:52.349537  287008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/apiserver.key.fe767c9d
	I0122 21:03:52.349585  287008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/proxy-client.key
	I0122 21:03:52.349700  287008 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:03:52.349731  287008 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:03:52.349738  287008 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:03:52.349767  287008 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:03:52.349790  287008 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:03:52.349826  287008 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:03:52.349902  287008 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:03:52.350747  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:03:52.411487  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:03:52.447424  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:03:52.487852  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:03:52.523581  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0122 21:03:52.553239  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:03:52.597665  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:03:52.629621  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:03:52.658687  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:03:52.686606  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:03:52.714907  287008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:03:52.742742  287008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:03:52.762834  287008 ssh_runner.go:195] Run: openssl version
	I0122 21:03:52.769801  287008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:03:52.783632  287008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:03:52.789204  287008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:03:52.789298  287008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:03:52.796393  287008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:03:52.809939  287008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:03:52.823590  287008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:03:52.828880  287008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:03:52.828991  287008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:03:52.835802  287008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:03:52.849550  287008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:03:52.863145  287008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:52.868654  287008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:52.868745  287008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:52.875683  287008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
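
Each test -L ... || ln -fs ... pair above installs a CA certificate under /etc/ssl/certs using its OpenSSL subject-hash filename (for example b5213941.0 for minikubeCA.pem), which is the name TLS libraries look up when scanning that directory. A small sketch of the same step driven from Go, shelling out to openssl for the hash; unlike the logged command it recreates the link unconditionally:

// Sketch: link a CA certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the "openssl x509 -hash" + "ln -fs" commands
// in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Replace any existing link so the operation stays idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
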
	I0122 21:03:52.888958  287008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:03:52.894484  287008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:03:52.901532  287008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:03:52.908907  287008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:03:52.916243  287008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:03:52.923478  287008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:03:52.930660  287008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
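
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours; a failing check would force regeneration before the control plane comes back. The same test expressed with Go's crypto/x509, as a sketch rather than certs.go itself:

// Sketch of the "-checkend 86400" validity test: report whether a PEM
// certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" is past the certificate's NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}

The 24-hour window mirrors the 86400-second argument in the commands above.
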
	I0122 21:03:52.937836  287008 kubeadm.go:392] StartCluster: {Name:test-preload-074508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-074508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:52.937983  287008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:03:52.938048  287008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:03:52.986007  287008 cri.go:89] found id: ""
	I0122 21:03:52.986141  287008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:03:52.998353  287008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:03:52.998381  287008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:03:52.998438  287008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:03:53.010359  287008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:03:53.010983  287008 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-074508" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:03:53.011155  287008 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-074508" cluster setting kubeconfig missing "test-preload-074508" context setting]
	I0122 21:03:53.011679  287008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:53.012596  287008 kapi.go:59] client config for test-preload-074508: &rest.Config{Host:"https://192.168.39.34:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/client.crt", KeyFile:"/home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/client.key", CAFile:"/home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 21:03:53.013592  287008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:03:53.027312  287008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.34
	I0122 21:03:53.027360  287008 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:03:53.027375  287008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:03:53.027454  287008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:03:53.104121  287008 cri.go:89] found id: ""
	I0122 21:03:53.104204  287008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:03:53.122165  287008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:03:53.134031  287008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:03:53.134060  287008 kubeadm.go:157] found existing configuration files:
	
	I0122 21:03:53.134119  287008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:03:53.145640  287008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:03:53.145720  287008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:03:53.158213  287008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:03:53.169061  287008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:03:53.169158  287008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:03:53.181143  287008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:03:53.192612  287008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:03:53.192676  287008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:03:53.204734  287008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:03:53.216045  287008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:03:53.216122  287008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:03:53.227849  287008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:03:53.239867  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:53.338271  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:54.377382  287008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.039060605s)
	I0122 21:03:54.377418  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:54.659438  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:54.744259  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:54.852210  287008 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:03:54.852325  287008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:03:55.352435  287008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:03:55.853042  287008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:03:55.900040  287008 api_server.go:72] duration metric: took 1.04782783s to wait for apiserver process to appear ...
	I0122 21:03:55.900083  287008 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:03:55.900110  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:03:55.900717  287008 api_server.go:269] stopped: https://192.168.39.34:8443/healthz: Get "https://192.168.39.34:8443/healthz": dial tcp 192.168.39.34:8443: connect: connection refused
	I0122 21:03:56.400382  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:03:56.401077  287008 api_server.go:269] stopped: https://192.168.39.34:8443/healthz: Get "https://192.168.39.34:8443/healthz": dial tcp 192.168.39.34:8443: connect: connection refused
	I0122 21:03:56.900838  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:04:00.508202  287008 api_server.go:279] https://192.168.39.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:04:00.508237  287008 api_server.go:103] status: https://192.168.39.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:04:00.508254  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:04:00.558695  287008 api_server.go:279] https://192.168.39.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:04:00.558735  287008 api_server.go:103] status: https://192.168.39.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:04:00.900260  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:04:00.906823  287008 api_server.go:279] https://192.168.39.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:04:00.906860  287008 api_server.go:103] status: https://192.168.39.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:01.400516  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:04:01.406850  287008 api_server.go:279] https://192.168.39.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:04:01.406891  287008 api_server.go:103] status: https://192.168.39.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:01.900543  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:04:01.907514  287008 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I0122 21:04:01.917327  287008 api_server.go:141] control plane version: v1.24.4
	I0122 21:04:01.917397  287008 api_server.go:131] duration metric: took 6.017287265s to wait for apiserver health ...
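
The healthz wait above goes through three phases: 403 while the request is still treated as system:anonymous and the RBAC bootstrap roles that normally open /healthz to unauthenticated callers have not been created yet, 500 while the listed post-start hooks are still pending, and finally 200 "ok". A minimal sketch of such a polling loop; skipping TLS verification and the fixed 500ms retry interval are simplifications for the sketch, not what api_server.go does:

// Minimal sketch of the healthz polling shown in the log: keep requesting
// /healthz until it returns 200 "ok" or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only because this sketch does not load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 while RBAC bootstrap roles are still missing, 500 while
			// post-start hooks are pending; both are simply retried.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.34:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
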
	I0122 21:04:01.917412  287008 cni.go:84] Creating CNI manager for ""
	I0122 21:04:01.917422  287008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:04:01.919469  287008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:04:01.920973  287008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:04:01.937349  287008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
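
The 496-byte file written to /etc/cni/net.d/1-k8s.conflist wires the bridge CNI plugin to the 10.244.0.0/16 pod subnet from the kubeadm config above. The sketch below emits an illustrative conflist of roughly that shape; apart from the subnet, the field values are assumptions rather than minikube's exact template:

// Emit an illustrative bridge CNI conflist. Everything except the pod subnet
// is an assumption made for this sketch, not minikube's real file.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // matches podSubnet in the kubeadm config above
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
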
	I0122 21:04:01.960574  287008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:04:01.960698  287008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0122 21:04:01.960749  287008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0122 21:04:01.971771  287008 system_pods.go:59] 7 kube-system pods found
	I0122 21:04:01.971817  287008 system_pods.go:61] "coredns-6d4b75cb6d-qlzdl" [be754e58-5d7e-41e6-b71d-cf6e995d2ac7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:04:01.971825  287008 system_pods.go:61] "etcd-test-preload-074508" [350db204-4f5b-478d-a2e7-93fee183dafa] Running
	I0122 21:04:01.971831  287008 system_pods.go:61] "kube-apiserver-test-preload-074508" [9da4d45e-89d8-49a9-ae2c-9e2707795a78] Running
	I0122 21:04:01.971837  287008 system_pods.go:61] "kube-controller-manager-test-preload-074508" [f430dd5f-68e5-440d-9bfe-a4a0e1614b23] Running
	I0122 21:04:01.971843  287008 system_pods.go:61] "kube-proxy-bvtsh" [1ba8fadb-7715-4ce0-845e-846f13caaf9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0122 21:04:01.971852  287008 system_pods.go:61] "kube-scheduler-test-preload-074508" [370f0d4f-3a46-4325-9b07-1c477ce7542b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:04:01.971868  287008 system_pods.go:61] "storage-provisioner" [ebde412e-558d-4812-aaed-fc8fdd8fa01e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0122 21:04:01.971881  287008 system_pods.go:74] duration metric: took 11.276667ms to wait for pod list to return data ...
	I0122 21:04:01.971892  287008 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:04:01.976279  287008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:04:01.976323  287008 node_conditions.go:123] node cpu capacity is 2
	I0122 21:04:01.976344  287008 node_conditions.go:105] duration metric: took 4.445911ms to run NodePressure ...
	I0122 21:04:01.976370  287008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:04:02.308746  287008 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0122 21:04:02.316749  287008 kubeadm.go:739] kubelet initialised
	I0122 21:04:02.316775  287008 kubeadm.go:740] duration metric: took 7.997665ms waiting for restarted kubelet to initialise ...
	I0122 21:04:02.316787  287008 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:04:02.326030  287008 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:02.335386  287008 pod_ready.go:98] node "test-preload-074508" hosting pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.335416  287008 pod_ready.go:82] duration metric: took 9.322628ms for pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace to be "Ready" ...
	E0122 21:04:02.335427  287008 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-074508" hosting pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.335435  287008 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:02.344307  287008 pod_ready.go:98] node "test-preload-074508" hosting pod "etcd-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.344347  287008 pod_ready.go:82] duration metric: took 8.899711ms for pod "etcd-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	E0122 21:04:02.344361  287008 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-074508" hosting pod "etcd-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.344373  287008 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:02.354126  287008 pod_ready.go:98] node "test-preload-074508" hosting pod "kube-apiserver-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.354167  287008 pod_ready.go:82] duration metric: took 9.773349ms for pod "kube-apiserver-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	E0122 21:04:02.354201  287008 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-074508" hosting pod "kube-apiserver-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.354211  287008 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:02.381930  287008 pod_ready.go:98] node "test-preload-074508" hosting pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.381974  287008 pod_ready.go:82] duration metric: took 27.747103ms for pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	E0122 21:04:02.381990  287008 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-074508" hosting pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.382001  287008 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bvtsh" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:02.766974  287008 pod_ready.go:98] node "test-preload-074508" hosting pod "kube-proxy-bvtsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.767020  287008 pod_ready.go:82] duration metric: took 385.004117ms for pod "kube-proxy-bvtsh" in "kube-system" namespace to be "Ready" ...
	E0122 21:04:02.767036  287008 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-074508" hosting pod "kube-proxy-bvtsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:02.767046  287008 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:03.165114  287008 pod_ready.go:98] node "test-preload-074508" hosting pod "kube-scheduler-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:03.165152  287008 pod_ready.go:82] duration metric: took 398.097531ms for pod "kube-scheduler-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	E0122 21:04:03.165164  287008 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-074508" hosting pod "kube-scheduler-test-preload-074508" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:03.165171  287008 pod_ready.go:39] duration metric: took 848.375132ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
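
The "extra waiting" above treats a pod as Ready only when its PodReady condition is True, which is why every wait is skipped while the node itself still reports Ready=False. A client-go sketch of that readiness check for the coredns pod; the kubeconfig path and label selector come from the log, while the timeout, poll interval, and overall structure are assumptions rather than pod_ready.go itself:

// Sketch: poll the kube-dns pods until one reports the PodReady condition True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20288-247142/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("coredns is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for coredns to become Ready")
}
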
	I0122 21:04:03.165202  287008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:04:03.179823  287008 ops.go:34] apiserver oom_adj: -16
	I0122 21:04:03.179851  287008 kubeadm.go:597] duration metric: took 10.181462727s to restartPrimaryControlPlane
	I0122 21:04:03.179862  287008 kubeadm.go:394] duration metric: took 10.242038108s to StartCluster
	I0122 21:04:03.179883  287008 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:04:03.179990  287008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:04:03.180658  287008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:04:03.180949  287008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:04:03.181091  287008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:04:03.181203  287008 config.go:182] Loaded profile config "test-preload-074508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0122 21:04:03.181219  287008 addons.go:69] Setting storage-provisioner=true in profile "test-preload-074508"
	I0122 21:04:03.181245  287008 addons.go:238] Setting addon storage-provisioner=true in "test-preload-074508"
	W0122 21:04:03.181273  287008 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:04:03.181315  287008 host.go:66] Checking if "test-preload-074508" exists ...
	I0122 21:04:03.181244  287008 addons.go:69] Setting default-storageclass=true in profile "test-preload-074508"
	I0122 21:04:03.181360  287008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-074508"
	I0122 21:04:03.181813  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:04:03.181863  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:04:03.181912  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:04:03.181963  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:04:03.182806  287008 out.go:177] * Verifying Kubernetes components...
	I0122 21:04:03.184203  287008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:04:03.199205  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0122 21:04:03.199379  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0122 21:04:03.199848  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:04:03.199935  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:04:03.200521  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:04:03.200536  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:04:03.200674  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:04:03.200699  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:04:03.200899  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:04:03.201137  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:04:03.201310  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetState
	I0122 21:04:03.201514  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:04:03.201566  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:04:03.203932  287008 kapi.go:59] client config for test-preload-074508: &rest.Config{Host:"https://192.168.39.34:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/client.crt", KeyFile:"/home/jenkins/minikube-integration/20288-247142/.minikube/profiles/test-preload-074508/client.key", CAFile:"/home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 21:04:03.204343  287008 addons.go:238] Setting addon default-storageclass=true in "test-preload-074508"
	W0122 21:04:03.204381  287008 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:04:03.204416  287008 host.go:66] Checking if "test-preload-074508" exists ...
	I0122 21:04:03.204821  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:04:03.204881  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:04:03.221498  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0122 21:04:03.221641  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0122 21:04:03.221975  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:04:03.222089  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:04:03.222518  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:04:03.222539  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:04:03.222655  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:04:03.222678  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:04:03.222957  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:04:03.222958  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:04:03.223175  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetState
	I0122 21:04:03.223515  287008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:04:03.223558  287008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:04:03.224841  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:04:03.227042  287008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:04:03.228601  287008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:04:03.228626  287008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:04:03.228654  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:04:03.232183  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:04:03.232559  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:04:03.232595  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:04:03.232916  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:04:03.233117  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:04:03.233248  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:04:03.233371  287008 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa Username:docker}
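
The sshutil line above constructs the SSH client that subsequent ssh_runner commands go through: key-based auth as user "docker" against 192.168.39.34:22 with the machine's id_rsa. A sketch using golang.org/x/crypto/ssh; the command it runs is only an example, and host-key verification is skipped for brevity:

// Sketch of an SSH client like the one sshutil builds above; this is not
// sshutil.go itself, and the executed command is just an example.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Skipped only to keep the sketch short; a real client should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.34:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err: %v)\n", out, err)
}
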
	I0122 21:04:03.262431  287008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0122 21:04:03.262959  287008 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:04:03.263476  287008 main.go:141] libmachine: Using API Version  1
	I0122 21:04:03.263497  287008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:04:03.263865  287008 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:04:03.264078  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetState
	I0122 21:04:03.266371  287008 main.go:141] libmachine: (test-preload-074508) Calling .DriverName
	I0122 21:04:03.266614  287008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:04:03.266632  287008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:04:03.266657  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHHostname
	I0122 21:04:03.269828  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:04:03.270363  287008 main.go:141] libmachine: (test-preload-074508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:7d:28", ip: ""} in network mk-test-preload-074508: {Iface:virbr1 ExpiryTime:2025-01-22 22:03:31 +0000 UTC Type:0 Mac:52:54:00:4f:7d:28 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:test-preload-074508 Clientid:01:52:54:00:4f:7d:28}
	I0122 21:04:03.270400  287008 main.go:141] libmachine: (test-preload-074508) DBG | domain test-preload-074508 has defined IP address 192.168.39.34 and MAC address 52:54:00:4f:7d:28 in network mk-test-preload-074508
	I0122 21:04:03.270604  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHPort
	I0122 21:04:03.270866  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHKeyPath
	I0122 21:04:03.271057  287008 main.go:141] libmachine: (test-preload-074508) Calling .GetSSHUsername
	I0122 21:04:03.271193  287008 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/test-preload-074508/id_rsa Username:docker}
	I0122 21:04:03.383231  287008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:04:03.404224  287008 node_ready.go:35] waiting up to 6m0s for node "test-preload-074508" to be "Ready" ...
	I0122 21:04:03.492339  287008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:04:03.513902  287008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:04:04.587966  287008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095567358s)
	I0122 21:04:04.588039  287008 main.go:141] libmachine: Making call to close driver server
	I0122 21:04:04.588054  287008 main.go:141] libmachine: (test-preload-074508) Calling .Close
	I0122 21:04:04.588047  287008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074099742s)
	I0122 21:04:04.588100  287008 main.go:141] libmachine: Making call to close driver server
	I0122 21:04:04.588112  287008 main.go:141] libmachine: (test-preload-074508) Calling .Close
	I0122 21:04:04.588369  287008 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:04:04.588389  287008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:04:04.588402  287008 main.go:141] libmachine: Making call to close driver server
	I0122 21:04:04.588410  287008 main.go:141] libmachine: (test-preload-074508) Calling .Close
	I0122 21:04:04.588419  287008 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:04:04.588457  287008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:04:04.588471  287008 main.go:141] libmachine: Making call to close driver server
	I0122 21:04:04.588481  287008 main.go:141] libmachine: (test-preload-074508) Calling .Close
	I0122 21:04:04.588618  287008 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:04:04.588631  287008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:04:04.590150  287008 main.go:141] libmachine: (test-preload-074508) DBG | Closing plugin on server side
	I0122 21:04:04.590176  287008 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:04:04.590206  287008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:04:04.599504  287008 main.go:141] libmachine: Making call to close driver server
	I0122 21:04:04.599535  287008 main.go:141] libmachine: (test-preload-074508) Calling .Close
	I0122 21:04:04.599872  287008 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:04:04.599898  287008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:04:04.599923  287008 main.go:141] libmachine: (test-preload-074508) DBG | Closing plugin on server side
	I0122 21:04:04.602216  287008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0122 21:04:04.603606  287008 addons.go:514] duration metric: took 1.422521862s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0122 21:04:05.408437  287008 node_ready.go:53] node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:07.408785  287008 node_ready.go:53] node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:09.409471  287008 node_ready.go:53] node "test-preload-074508" has status "Ready":"False"
	I0122 21:04:10.909614  287008 node_ready.go:49] node "test-preload-074508" has status "Ready":"True"
	I0122 21:04:10.909644  287008 node_ready.go:38] duration metric: took 7.505380488s for node "test-preload-074508" to be "Ready" ...
	I0122 21:04:10.909655  287008 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:04:10.916241  287008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:10.925922  287008 pod_ready.go:93] pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:10.925968  287008 pod_ready.go:82] duration metric: took 9.692945ms for pod "coredns-6d4b75cb6d-qlzdl" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:10.925983  287008 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:12.940980  287008 pod_ready.go:103] pod "etcd-test-preload-074508" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:13.933478  287008 pod_ready.go:93] pod "etcd-test-preload-074508" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:13.933513  287008 pod_ready.go:82] duration metric: took 3.007521655s for pod "etcd-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.933525  287008 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.939086  287008 pod_ready.go:93] pod "kube-apiserver-test-preload-074508" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:13.939115  287008 pod_ready.go:82] duration metric: took 5.582537ms for pod "kube-apiserver-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.939129  287008 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.945267  287008 pod_ready.go:93] pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:13.945297  287008 pod_ready.go:82] duration metric: took 6.158537ms for pod "kube-controller-manager-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.945310  287008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvtsh" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.951257  287008 pod_ready.go:93] pod "kube-proxy-bvtsh" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:13.951293  287008 pod_ready.go:82] duration metric: took 5.972726ms for pod "kube-proxy-bvtsh" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:13.951307  287008 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:14.508458  287008 pod_ready.go:93] pod "kube-scheduler-test-preload-074508" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:14.508494  287008 pod_ready.go:82] duration metric: took 557.177117ms for pod "kube-scheduler-test-preload-074508" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:14.508512  287008 pod_ready.go:39] duration metric: took 3.598844806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:04:14.508535  287008 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:04:14.508619  287008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:04:14.526928  287008 api_server.go:72] duration metric: took 11.34592639s to wait for apiserver process to appear ...
	I0122 21:04:14.526962  287008 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:04:14.526991  287008 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I0122 21:04:14.535605  287008 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I0122 21:04:14.536821  287008 api_server.go:141] control plane version: v1.24.4
	I0122 21:04:14.536870  287008 api_server.go:131] duration metric: took 9.877911ms to wait for apiserver health ...
	I0122 21:04:14.536879  287008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:04:14.711960  287008 system_pods.go:59] 7 kube-system pods found
	I0122 21:04:14.712003  287008 system_pods.go:61] "coredns-6d4b75cb6d-qlzdl" [be754e58-5d7e-41e6-b71d-cf6e995d2ac7] Running
	I0122 21:04:14.712009  287008 system_pods.go:61] "etcd-test-preload-074508" [350db204-4f5b-478d-a2e7-93fee183dafa] Running
	I0122 21:04:14.712013  287008 system_pods.go:61] "kube-apiserver-test-preload-074508" [9da4d45e-89d8-49a9-ae2c-9e2707795a78] Running
	I0122 21:04:14.712016  287008 system_pods.go:61] "kube-controller-manager-test-preload-074508" [f430dd5f-68e5-440d-9bfe-a4a0e1614b23] Running
	I0122 21:04:14.712019  287008 system_pods.go:61] "kube-proxy-bvtsh" [1ba8fadb-7715-4ce0-845e-846f13caaf9a] Running
	I0122 21:04:14.712023  287008 system_pods.go:61] "kube-scheduler-test-preload-074508" [370f0d4f-3a46-4325-9b07-1c477ce7542b] Running
	I0122 21:04:14.712027  287008 system_pods.go:61] "storage-provisioner" [ebde412e-558d-4812-aaed-fc8fdd8fa01e] Running
	I0122 21:04:14.712034  287008 system_pods.go:74] duration metric: took 175.147674ms to wait for pod list to return data ...
	I0122 21:04:14.712042  287008 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:04:14.909370  287008 default_sa.go:45] found service account: "default"
	I0122 21:04:14.909399  287008 default_sa.go:55] duration metric: took 197.351032ms for default service account to be created ...
	I0122 21:04:14.909409  287008 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:04:15.111094  287008 system_pods.go:87] 7 kube-system pods found
	I0122 21:04:15.310091  287008 system_pods.go:105] "coredns-6d4b75cb6d-qlzdl" [be754e58-5d7e-41e6-b71d-cf6e995d2ac7] Running
	I0122 21:04:15.310115  287008 system_pods.go:105] "etcd-test-preload-074508" [350db204-4f5b-478d-a2e7-93fee183dafa] Running
	I0122 21:04:15.310121  287008 system_pods.go:105] "kube-apiserver-test-preload-074508" [9da4d45e-89d8-49a9-ae2c-9e2707795a78] Running
	I0122 21:04:15.310125  287008 system_pods.go:105] "kube-controller-manager-test-preload-074508" [f430dd5f-68e5-440d-9bfe-a4a0e1614b23] Running
	I0122 21:04:15.310130  287008 system_pods.go:105] "kube-proxy-bvtsh" [1ba8fadb-7715-4ce0-845e-846f13caaf9a] Running
	I0122 21:04:15.310134  287008 system_pods.go:105] "kube-scheduler-test-preload-074508" [370f0d4f-3a46-4325-9b07-1c477ce7542b] Running
	I0122 21:04:15.310138  287008 system_pods.go:105] "storage-provisioner" [ebde412e-558d-4812-aaed-fc8fdd8fa01e] Running
	I0122 21:04:15.310147  287008 system_pods.go:147] duration metric: took 400.730851ms to wait for k8s-apps to be running ...
	I0122 21:04:15.310154  287008 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 21:04:15.310249  287008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:04:15.327140  287008 system_svc.go:56] duration metric: took 16.975314ms WaitForService to wait for kubelet
	I0122 21:04:15.327180  287008 kubeadm.go:582] duration metric: took 12.146187684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:04:15.327204  287008 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:04:15.508443  287008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:04:15.508474  287008 node_conditions.go:123] node cpu capacity is 2
	I0122 21:04:15.508486  287008 node_conditions.go:105] duration metric: took 181.278386ms to run NodePressure ...
	I0122 21:04:15.508500  287008 start.go:241] waiting for startup goroutines ...
	I0122 21:04:15.508506  287008 start.go:246] waiting for cluster config update ...
	I0122 21:04:15.508518  287008 start.go:255] writing updated cluster config ...
	I0122 21:04:15.508797  287008 ssh_runner.go:195] Run: rm -f paused
	I0122 21:04:15.561984  287008 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0122 21:04:15.563998  287008 out.go:201] 
	W0122 21:04:15.565523  287008 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0122 21:04:15.566919  287008 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0122 21:04:15.568396  287008 out.go:177] * Done! kubectl is now configured to use "test-preload-074508" cluster and "default" namespace by default
	
	
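The readiness sequence logged above comes down to polling the apiserver's /healthz endpoint until it answers 200 "ok" (see the 200 response at 21:04:14). Below is a minimal Go sketch of that kind of probe; it is not minikube's own implementation, and the endpoint URL and the decision to skip TLS verification are assumptions made only to keep the example self-contained.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The real probe authenticates against the cluster CA; the sketch
            // skips certificate verification purely to stay self-contained.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, url)
    }

    func main() {
        // Address taken from the control-plane endpoint seen in the logs above.
        if err := waitForHealthz("https://192.168.39.34:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }

In the logged check the client trusts the cluster's CA certificate rather than skipping verification, and the same loop pattern is applied to the earlier steps as well (pod Ready conditions, the pgrep for the apiserver process, and the systemctl is-active check for kubelet).
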
	==> CRI-O <==
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.554776143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737579856554750649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61524ac7-dc10-40d0-8ada-d199002166d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.555384869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18f36c32-4b93-44b5-adb1-8efc517efef7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.555438673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18f36c32-4b93-44b5-adb1-8efc517efef7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.555720961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55c7e5052c6dc052a00d4e15d4b522f8aa35ee34d4fc553eaf8f0a84b94d9322,PodSandboxId:cc62e98cdf4e27573930eac2f560e37d0155c9cc44f4eed78aca1288a3e8e102,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737579848906104804,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlzdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be754e58-5d7e-41e6-b71d-cf6e995d2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 93cd37db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9172e6c9b3e66fe93dd4bf83fbeee22f8da2492273202679fb2a019dd649f43,PodSandboxId:8fcf55e7561c17abc7708df31e51e05894da789bbdfdbcc013096c7df7128687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737579842138289139,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvtsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1ba8fadb-7715-4ce0-845e-846f13caaf9a,},Annotations:map[string]string{io.kubernetes.container.hash: d466d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662782fddbe1a8cd12a1c1dde1852aa7761708f5e82f92f3f4364019df58dc1,PodSandboxId:d9153f2f9aae2d166959a243e6572609d0285d13857da26ff4199b95a57ae804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737579841827302313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd
e412e-558d-4812-aaed-fc8fdd8fa01e,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef4d1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd812d1d4fe3db859c6722848e7e1c38730fa94a166edf990444e974880e45fc,PodSandboxId:162a6bd88578fee092f9f0e81370c99f576db2584400a870312a1aa3e40d8cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737579835709439136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3d8853a68
43c6480296e55b4352710,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411a02f7b3fcfd9466cfead5b10772eed3a6e75134e6120cd02f7dbddab69b4,PodSandboxId:94d7f471138656a018ba7f4cc79b926696b02a2f1070b3aab843fcf5a44d5e8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737579835625739250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb203f47ba7dc755d98de6
d2be46ff29,},Annotations:map[string]string{io.kubernetes.container.hash: fefd19d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c7548fb9f6cba35d7c64434de3cd5aab2629a673c7d6f7069727793026a3c0,PodSandboxId:48adc24e5df8d75ca7bcb83b1e27221612caff3cc7af03987b5c95e74fdf7662,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737579835600754081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b072f7b0d867a8b1372ffd6dd1ad8f13,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c97928cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06991b008ecc2a0203c0ce9e0da29a7ce9d02e3ec47b4bccac61fd4120f04fdb,PodSandboxId:faa3fadb44fc58a5f8a1233ee6f29b08bd346c1744f637168c4b9546b259b580,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737579835534901388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b96fdfafc31c2fbd15868746822433,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18f36c32-4b93-44b5-adb1-8efc517efef7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.596394305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5b9e100-5cf9-489b-9404-5debe0954ec0 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.596493840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5b9e100-5cf9-489b-9404-5debe0954ec0 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.597956512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a563c18a-5309-431f-b17b-36151adc8d61 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.598388287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737579856598364616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a563c18a-5309-431f-b17b-36151adc8d61 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.599020980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e30ab269-3ba4-4193-b2a9-b8d7d7341506 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.599076105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e30ab269-3ba4-4193-b2a9-b8d7d7341506 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.599256765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55c7e5052c6dc052a00d4e15d4b522f8aa35ee34d4fc553eaf8f0a84b94d9322,PodSandboxId:cc62e98cdf4e27573930eac2f560e37d0155c9cc44f4eed78aca1288a3e8e102,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737579848906104804,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlzdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be754e58-5d7e-41e6-b71d-cf6e995d2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 93cd37db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9172e6c9b3e66fe93dd4bf83fbeee22f8da2492273202679fb2a019dd649f43,PodSandboxId:8fcf55e7561c17abc7708df31e51e05894da789bbdfdbcc013096c7df7128687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737579842138289139,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvtsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1ba8fadb-7715-4ce0-845e-846f13caaf9a,},Annotations:map[string]string{io.kubernetes.container.hash: d466d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662782fddbe1a8cd12a1c1dde1852aa7761708f5e82f92f3f4364019df58dc1,PodSandboxId:d9153f2f9aae2d166959a243e6572609d0285d13857da26ff4199b95a57ae804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737579841827302313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd
e412e-558d-4812-aaed-fc8fdd8fa01e,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef4d1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd812d1d4fe3db859c6722848e7e1c38730fa94a166edf990444e974880e45fc,PodSandboxId:162a6bd88578fee092f9f0e81370c99f576db2584400a870312a1aa3e40d8cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737579835709439136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3d8853a68
43c6480296e55b4352710,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411a02f7b3fcfd9466cfead5b10772eed3a6e75134e6120cd02f7dbddab69b4,PodSandboxId:94d7f471138656a018ba7f4cc79b926696b02a2f1070b3aab843fcf5a44d5e8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737579835625739250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb203f47ba7dc755d98de6
d2be46ff29,},Annotations:map[string]string{io.kubernetes.container.hash: fefd19d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c7548fb9f6cba35d7c64434de3cd5aab2629a673c7d6f7069727793026a3c0,PodSandboxId:48adc24e5df8d75ca7bcb83b1e27221612caff3cc7af03987b5c95e74fdf7662,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737579835600754081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b072f7b0d867a8b1372ffd6dd1ad8f13,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c97928cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06991b008ecc2a0203c0ce9e0da29a7ce9d02e3ec47b4bccac61fd4120f04fdb,PodSandboxId:faa3fadb44fc58a5f8a1233ee6f29b08bd346c1744f637168c4b9546b259b580,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737579835534901388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b96fdfafc31c2fbd15868746822433,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e30ab269-3ba4-4193-b2a9-b8d7d7341506 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.641988612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=688120a7-e814-4dce-9aba-1459dfe7a35c name=/runtime.v1.RuntimeService/Version
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.642128307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=688120a7-e814-4dce-9aba-1459dfe7a35c name=/runtime.v1.RuntimeService/Version
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.644175156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a995ccf-383a-4e73-b3b7-6f0194557662 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.644753024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737579856644726459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a995ccf-383a-4e73-b3b7-6f0194557662 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.645389513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a13e7942-a061-476d-847c-71a15bc4d6a2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.645443070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a13e7942-a061-476d-847c-71a15bc4d6a2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.645622078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55c7e5052c6dc052a00d4e15d4b522f8aa35ee34d4fc553eaf8f0a84b94d9322,PodSandboxId:cc62e98cdf4e27573930eac2f560e37d0155c9cc44f4eed78aca1288a3e8e102,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737579848906104804,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlzdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be754e58-5d7e-41e6-b71d-cf6e995d2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 93cd37db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9172e6c9b3e66fe93dd4bf83fbeee22f8da2492273202679fb2a019dd649f43,PodSandboxId:8fcf55e7561c17abc7708df31e51e05894da789bbdfdbcc013096c7df7128687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737579842138289139,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvtsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1ba8fadb-7715-4ce0-845e-846f13caaf9a,},Annotations:map[string]string{io.kubernetes.container.hash: d466d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662782fddbe1a8cd12a1c1dde1852aa7761708f5e82f92f3f4364019df58dc1,PodSandboxId:d9153f2f9aae2d166959a243e6572609d0285d13857da26ff4199b95a57ae804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737579841827302313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd
e412e-558d-4812-aaed-fc8fdd8fa01e,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef4d1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd812d1d4fe3db859c6722848e7e1c38730fa94a166edf990444e974880e45fc,PodSandboxId:162a6bd88578fee092f9f0e81370c99f576db2584400a870312a1aa3e40d8cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737579835709439136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3d8853a68
43c6480296e55b4352710,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411a02f7b3fcfd9466cfead5b10772eed3a6e75134e6120cd02f7dbddab69b4,PodSandboxId:94d7f471138656a018ba7f4cc79b926696b02a2f1070b3aab843fcf5a44d5e8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737579835625739250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb203f47ba7dc755d98de6
d2be46ff29,},Annotations:map[string]string{io.kubernetes.container.hash: fefd19d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c7548fb9f6cba35d7c64434de3cd5aab2629a673c7d6f7069727793026a3c0,PodSandboxId:48adc24e5df8d75ca7bcb83b1e27221612caff3cc7af03987b5c95e74fdf7662,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737579835600754081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b072f7b0d867a8b1372ffd6dd1ad8f13,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c97928cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06991b008ecc2a0203c0ce9e0da29a7ce9d02e3ec47b4bccac61fd4120f04fdb,PodSandboxId:faa3fadb44fc58a5f8a1233ee6f29b08bd346c1744f637168c4b9546b259b580,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737579835534901388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b96fdfafc31c2fbd15868746822433,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a13e7942-a061-476d-847c-71a15bc4d6a2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.681625149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5957d0a-005e-4431-8d2b-aecd3f238b5c name=/runtime.v1.RuntimeService/Version
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.681789552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5957d0a-005e-4431-8d2b-aecd3f238b5c name=/runtime.v1.RuntimeService/Version
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.683011875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3022fca-0818-429a-8e3a-edac9cac7894 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.683484297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737579856683459540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3022fca-0818-429a-8e3a-edac9cac7894 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.684302140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fcc8ace-5347-4adc-9c0c-7bba535c35d5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.684373816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fcc8ace-5347-4adc-9c0c-7bba535c35d5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:04:16 test-preload-074508 crio[666]: time="2025-01-22 21:04:16.684572394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55c7e5052c6dc052a00d4e15d4b522f8aa35ee34d4fc553eaf8f0a84b94d9322,PodSandboxId:cc62e98cdf4e27573930eac2f560e37d0155c9cc44f4eed78aca1288a3e8e102,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737579848906104804,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qlzdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be754e58-5d7e-41e6-b71d-cf6e995d2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 93cd37db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9172e6c9b3e66fe93dd4bf83fbeee22f8da2492273202679fb2a019dd649f43,PodSandboxId:8fcf55e7561c17abc7708df31e51e05894da789bbdfdbcc013096c7df7128687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737579842138289139,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvtsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1ba8fadb-7715-4ce0-845e-846f13caaf9a,},Annotations:map[string]string{io.kubernetes.container.hash: d466d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662782fddbe1a8cd12a1c1dde1852aa7761708f5e82f92f3f4364019df58dc1,PodSandboxId:d9153f2f9aae2d166959a243e6572609d0285d13857da26ff4199b95a57ae804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737579841827302313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd
e412e-558d-4812-aaed-fc8fdd8fa01e,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef4d1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd812d1d4fe3db859c6722848e7e1c38730fa94a166edf990444e974880e45fc,PodSandboxId:162a6bd88578fee092f9f0e81370c99f576db2584400a870312a1aa3e40d8cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737579835709439136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3d8853a68
43c6480296e55b4352710,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411a02f7b3fcfd9466cfead5b10772eed3a6e75134e6120cd02f7dbddab69b4,PodSandboxId:94d7f471138656a018ba7f4cc79b926696b02a2f1070b3aab843fcf5a44d5e8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737579835625739250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb203f47ba7dc755d98de6
d2be46ff29,},Annotations:map[string]string{io.kubernetes.container.hash: fefd19d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c7548fb9f6cba35d7c64434de3cd5aab2629a673c7d6f7069727793026a3c0,PodSandboxId:48adc24e5df8d75ca7bcb83b1e27221612caff3cc7af03987b5c95e74fdf7662,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737579835600754081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b072f7b0d867a8b1372ffd6dd1ad8f13,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c97928cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06991b008ecc2a0203c0ce9e0da29a7ce9d02e3ec47b4bccac61fd4120f04fdb,PodSandboxId:faa3fadb44fc58a5f8a1233ee6f29b08bd346c1744f637168c4b9546b259b580,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737579835534901388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-074508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b96fdfafc31c2fbd15868746822433,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fcc8ace-5347-4adc-9c0c-7bba535c35d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55c7e5052c6dc       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   cc62e98cdf4e2       coredns-6d4b75cb6d-qlzdl
	d9172e6c9b3e6       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   8fcf55e7561c1       kube-proxy-bvtsh
	7662782fddbe1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   d9153f2f9aae2       storage-provisioner
	cd812d1d4fe3d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   162a6bd88578f       kube-scheduler-test-preload-074508
	4411a02f7b3fc       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   94d7f47113865       kube-apiserver-test-preload-074508
	24c7548fb9f6c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   48adc24e5df8d       etcd-test-preload-074508
	06991b008ecc2       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   faa3fadb44fc5       kube-controller-manager-test-preload-074508
	
	
	==> coredns [55c7e5052c6dc052a00d4e15d4b522f8aa35ee34d4fc553eaf8f0a84b94d9322] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47872 - 52307 "HINFO IN 7136198444379275126.7759297668602581202. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025341855s
	
	
	==> describe nodes <==
	Name:               test-preload-074508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-074508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4
	                    minikube.k8s.io/name=test-preload-074508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_22T21_00_44_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 Jan 2025 21:00:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-074508
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 Jan 2025 21:04:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 Jan 2025 21:04:10 +0000   Wed, 22 Jan 2025 21:00:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 Jan 2025 21:04:10 +0000   Wed, 22 Jan 2025 21:00:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 Jan 2025 21:04:10 +0000   Wed, 22 Jan 2025 21:00:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 Jan 2025 21:04:10 +0000   Wed, 22 Jan 2025 21:04:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    test-preload-074508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c85068e383bb4c7fa5fe431cf20a371a
	  System UUID:                c85068e3-83bb-4c7f-a5fe-431cf20a371a
	  Boot ID:                    9d293ea7-71fc-4891-9761-9ed962843c31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-qlzdl                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m19s
	  kube-system                 etcd-test-preload-074508                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m32s
	  kube-system                 kube-apiserver-test-preload-074508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-test-preload-074508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-proxy-bvtsh                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-scheduler-test-preload-074508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14s                    kube-proxy       
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m41s (x5 over 3m41s)  kubelet          Node test-preload-074508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s (x5 over 3m41s)  kubelet          Node test-preload-074508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s (x5 over 3m41s)  kubelet          Node test-preload-074508 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m32s                  kubelet          Node test-preload-074508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m32s                  kubelet          Node test-preload-074508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m32s                  kubelet          Node test-preload-074508 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m22s                  kubelet          Node test-preload-074508 status is now: NodeReady
	  Normal  RegisteredNode           3m20s                  node-controller  Node test-preload-074508 event: Registered Node test-preload-074508 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)      kubelet          Node test-preload-074508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)      kubelet          Node test-preload-074508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)      kubelet          Node test-preload-074508 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-074508 event: Registered Node test-preload-074508 in Controller
	
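As a quick cross-check of the Allocated resources figures above: 750m of requested CPU against a 2-CPU (2000m) allocatable node is 750/2000 = 37.5%, reported as 37%, and 170Mi of requested memory against 2164184Ki (about 2113Mi) allocatable is roughly 8%.
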
	
	==> dmesg <==
	[Jan22 21:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054024] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043113] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.171430] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.183053] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.540054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.989726] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.056479] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069513] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.202374] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.141389] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.319632] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +13.257962] systemd-fstab-generator[988]: Ignoring "noauto" option for root device
	[  +0.064622] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.253501] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[Jan22 21:04] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.496297] systemd-fstab-generator[1777]: Ignoring "noauto" option for root device
	[  +5.404553] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [24c7548fb9f6cba35d7c64434de3cd5aab2629a673c7d6f7069727793026a3c0] <==
	{"level":"info","ts":"2025-01-22T21:03:56.014Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6c39268f2da6496d","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-22T21:03:56.016Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-22T21:03:56.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d switched to configuration voters=(7798306626156775789)"}
	{"level":"info","ts":"2025-01-22T21:03:56.016Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c5b11fc56322ab9a","local-member-id":"6c39268f2da6496d","added-peer-id":"6c39268f2da6496d","added-peer-peer-urls":["https://192.168.39.34:2380"]}
	{"level":"info","ts":"2025-01-22T21:03:56.017Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c5b11fc56322ab9a","local-member-id":"6c39268f2da6496d","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-22T21:03:56.017Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-22T21:03:56.021Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-22T21:03:56.023Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"info","ts":"2025-01-22T21:03:56.023Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.34:2380"}
	{"level":"info","ts":"2025-01-22T21:03:56.024Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6c39268f2da6496d","initial-advertise-peer-urls":["https://192.168.39.34:2380"],"listen-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.34:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-22T21:03:56.024Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d received MsgPreVoteResp from 6c39268f2da6496d at term 2"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d became candidate at term 3"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d received MsgVoteResp from 6c39268f2da6496d at term 3"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c39268f2da6496d became leader at term 3"}
	{"level":"info","ts":"2025-01-22T21:03:57.887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6c39268f2da6496d elected leader 6c39268f2da6496d at term 3"}
	{"level":"info","ts":"2025-01-22T21:03:57.888Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6c39268f2da6496d","local-member-attributes":"{Name:test-preload-074508 ClientURLs:[https://192.168.39.34:2379]}","request-path":"/0/members/6c39268f2da6496d/attributes","cluster-id":"c5b11fc56322ab9a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-22T21:03:57.888Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-22T21:03:57.891Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-22T21:03:57.892Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-22T21:03:57.894Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.34:2379"}
	{"level":"info","ts":"2025-01-22T21:03:57.894Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-22T21:03:57.894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:04:17 up 0 min,  0 users,  load average: 1.56, 0.46, 0.16
	Linux test-preload-074508 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4411a02f7b3fcfd9466cfead5b10772eed3a6e75134e6120cd02f7dbddab69b4] <==
	I0122 21:04:00.491958       1 controller.go:85] Starting OpenAPI V3 controller
	I0122 21:04:00.492012       1 naming_controller.go:291] Starting NamingConditionController
	I0122 21:04:00.492057       1 establishing_controller.go:76] Starting EstablishingController
	I0122 21:04:00.493469       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0122 21:04:00.493612       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0122 21:04:00.493698       1 crd_finalizer.go:266] Starting CRDFinalizer
	E0122 21:04:00.569424       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0122 21:04:00.633925       1 cache.go:39] Caches are synced for autoregister controller
	I0122 21:04:00.634109       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0122 21:04:00.636578       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0122 21:04:00.639274       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0122 21:04:00.641764       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0122 21:04:00.653091       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0122 21:04:00.654430       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0122 21:04:00.659996       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0122 21:04:01.073497       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0122 21:04:01.445941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0122 21:04:02.135458       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0122 21:04:02.161556       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0122 21:04:02.234465       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0122 21:04:02.272842       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0122 21:04:02.287294       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0122 21:04:02.563579       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0122 21:04:13.312370       1 controller.go:611] quota admission added evaluator for: endpoints
	I0122 21:04:13.317697       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [06991b008ecc2a0203c0ce9e0da29a7ce9d02e3ec47b4bccac61fd4120f04fdb] <==
	I0122 21:04:13.131352       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0122 21:04:13.131619       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0122 21:04:13.131754       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0122 21:04:13.136416       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0122 21:04:13.145565       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0122 21:04:13.149810       1 shared_informer.go:262] Caches are synced for persistent volume
	I0122 21:04:13.171036       1 shared_informer.go:262] Caches are synced for disruption
	I0122 21:04:13.171059       1 disruption.go:371] Sending events to api server.
	I0122 21:04:13.195132       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0122 21:04:13.201712       1 shared_informer.go:262] Caches are synced for attach detach
	I0122 21:04:13.216930       1 shared_informer.go:262] Caches are synced for taint
	I0122 21:04:13.217175       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0122 21:04:13.217305       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-074508. Assuming now as a timestamp.
	I0122 21:04:13.217357       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0122 21:04:13.217375       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0122 21:04:13.217844       1 event.go:294] "Event occurred" object="test-preload-074508" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-074508 event: Registered Node test-preload-074508 in Controller"
	I0122 21:04:13.287128       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0122 21:04:13.288209       1 shared_informer.go:262] Caches are synced for endpoint
	I0122 21:04:13.288962       1 shared_informer.go:262] Caches are synced for daemon sets
	I0122 21:04:13.295764       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0122 21:04:13.311750       1 shared_informer.go:262] Caches are synced for resource quota
	I0122 21:04:13.330806       1 shared_informer.go:262] Caches are synced for resource quota
	I0122 21:04:13.713243       1 shared_informer.go:262] Caches are synced for garbage collector
	I0122 21:04:13.713277       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0122 21:04:13.767769       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [d9172e6c9b3e66fe93dd4bf83fbeee22f8da2492273202679fb2a019dd649f43] <==
	I0122 21:04:02.505830       1 node.go:163] Successfully retrieved node IP: 192.168.39.34
	I0122 21:04:02.505921       1 server_others.go:138] "Detected node IP" address="192.168.39.34"
	I0122 21:04:02.505968       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0122 21:04:02.545236       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0122 21:04:02.545276       1 server_others.go:206] "Using iptables Proxier"
	I0122 21:04:02.546093       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0122 21:04:02.546758       1 server.go:661] "Version info" version="v1.24.4"
	I0122 21:04:02.546791       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 21:04:02.548999       1 config.go:317] "Starting service config controller"
	I0122 21:04:02.549068       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0122 21:04:02.549103       1 config.go:226] "Starting endpoint slice config controller"
	I0122 21:04:02.549108       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0122 21:04:02.550118       1 config.go:444] "Starting node config controller"
	I0122 21:04:02.550153       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0122 21:04:02.649737       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0122 21:04:02.649807       1 shared_informer.go:262] Caches are synced for service config
	I0122 21:04:02.655177       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [cd812d1d4fe3db859c6722848e7e1c38730fa94a166edf990444e974880e45fc] <==
	I0122 21:03:56.880484       1 serving.go:348] Generated self-signed cert in-memory
	W0122 21:04:00.541591       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0122 21:04:00.541820       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0122 21:04:00.541834       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0122 21:04:00.541843       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0122 21:04:00.578466       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0122 21:04:00.578505       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 21:04:00.580316       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0122 21:04:00.584766       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0122 21:04:00.586757       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0122 21:04:00.595235       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0122 21:04:00.699748       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.615756    1126 setters.go:532] "Node became not ready" node="test-preload-074508" condition={Type:Ready Status:False LastHeartbeatTime:2025-01-22 21:04:00.615603056 +0000 UTC m=+5.964401951 LastTransitionTime:2025-01-22 21:04:00.615603056 +0000 UTC m=+5.964401951 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.784415    1126 apiserver.go:52] "Watching apiserver"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.789231    1126 topology_manager.go:200] "Topology Admit Handler"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.789369    1126 topology_manager.go:200] "Topology Admit Handler"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.789461    1126 topology_manager.go:200] "Topology Admit Handler"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: E0122 21:04:00.790612    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qlzdl" podUID=be754e58-5d7e-41e6-b71d-cf6e995d2ac7
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.864055    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ba8fadb-7715-4ce0-845e-846f13caaf9a-kube-proxy\") pod \"kube-proxy-bvtsh\" (UID: \"1ba8fadb-7715-4ce0-845e-846f13caaf9a\") " pod="kube-system/kube-proxy-bvtsh"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.864593    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ba8fadb-7715-4ce0-845e-846f13caaf9a-xtables-lock\") pod \"kube-proxy-bvtsh\" (UID: \"1ba8fadb-7715-4ce0-845e-846f13caaf9a\") " pod="kube-system/kube-proxy-bvtsh"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.864841    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ba8fadb-7715-4ce0-845e-846f13caaf9a-lib-modules\") pod \"kube-proxy-bvtsh\" (UID: \"1ba8fadb-7715-4ce0-845e-846f13caaf9a\") " pod="kube-system/kube-proxy-bvtsh"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.865094    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhs4\" (UniqueName: \"kubernetes.io/projected/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-kube-api-access-zrhs4\") pod \"coredns-6d4b75cb6d-qlzdl\" (UID: \"be754e58-5d7e-41e6-b71d-cf6e995d2ac7\") " pod="kube-system/coredns-6d4b75cb6d-qlzdl"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.865329    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ebde412e-558d-4812-aaed-fc8fdd8fa01e-tmp\") pod \"storage-provisioner\" (UID: \"ebde412e-558d-4812-aaed-fc8fdd8fa01e\") " pod="kube-system/storage-provisioner"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.865476    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29dz8\" (UniqueName: \"kubernetes.io/projected/1ba8fadb-7715-4ce0-845e-846f13caaf9a-kube-api-access-29dz8\") pod \"kube-proxy-bvtsh\" (UID: \"1ba8fadb-7715-4ce0-845e-846f13caaf9a\") " pod="kube-system/kube-proxy-bvtsh"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.865768    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbxhr\" (UniqueName: \"kubernetes.io/projected/ebde412e-558d-4812-aaed-fc8fdd8fa01e-kube-api-access-wbxhr\") pod \"storage-provisioner\" (UID: \"ebde412e-558d-4812-aaed-fc8fdd8fa01e\") " pod="kube-system/storage-provisioner"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.865976    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume\") pod \"coredns-6d4b75cb6d-qlzdl\" (UID: \"be754e58-5d7e-41e6-b71d-cf6e995d2ac7\") " pod="kube-system/coredns-6d4b75cb6d-qlzdl"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: I0122 21:04:00.866112    1126 reconciler.go:159] "Reconciler: start to sync state"
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: E0122 21:04:00.969418    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 22 21:04:00 test-preload-074508 kubelet[1126]: E0122 21:04:00.969886    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume podName:be754e58-5d7e-41e6-b71d-cf6e995d2ac7 nodeName:}" failed. No retries permitted until 2025-01-22 21:04:01.469837469 +0000 UTC m=+6.818636367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume") pod "coredns-6d4b75cb6d-qlzdl" (UID: "be754e58-5d7e-41e6-b71d-cf6e995d2ac7") : object "kube-system"/"coredns" not registered
	Jan 22 21:04:01 test-preload-074508 kubelet[1126]: E0122 21:04:01.472472    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 22 21:04:01 test-preload-074508 kubelet[1126]: E0122 21:04:01.472572    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume podName:be754e58-5d7e-41e6-b71d-cf6e995d2ac7 nodeName:}" failed. No retries permitted until 2025-01-22 21:04:02.472555045 +0000 UTC m=+7.821353939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume") pod "coredns-6d4b75cb6d-qlzdl" (UID: "be754e58-5d7e-41e6-b71d-cf6e995d2ac7") : object "kube-system"/"coredns" not registered
	Jan 22 21:04:01 test-preload-074508 kubelet[1126]: E0122 21:04:01.937153    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qlzdl" podUID=be754e58-5d7e-41e6-b71d-cf6e995d2ac7
	Jan 22 21:04:02 test-preload-074508 kubelet[1126]: E0122 21:04:02.480263    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 22 21:04:02 test-preload-074508 kubelet[1126]: E0122 21:04:02.480347    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume podName:be754e58-5d7e-41e6-b71d-cf6e995d2ac7 nodeName:}" failed. No retries permitted until 2025-01-22 21:04:04.480326117 +0000 UTC m=+9.829125023 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume") pod "coredns-6d4b75cb6d-qlzdl" (UID: "be754e58-5d7e-41e6-b71d-cf6e995d2ac7") : object "kube-system"/"coredns" not registered
	Jan 22 21:04:03 test-preload-074508 kubelet[1126]: E0122 21:04:03.936754    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qlzdl" podUID=be754e58-5d7e-41e6-b71d-cf6e995d2ac7
	Jan 22 21:04:04 test-preload-074508 kubelet[1126]: E0122 21:04:04.495539    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 22 21:04:04 test-preload-074508 kubelet[1126]: E0122 21:04:04.496401    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume podName:be754e58-5d7e-41e6-b71d-cf6e995d2ac7 nodeName:}" failed. No retries permitted until 2025-01-22 21:04:08.496352087 +0000 UTC m=+13.845150982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be754e58-5d7e-41e6-b71d-cf6e995d2ac7-config-volume") pod "coredns-6d4b75cb6d-qlzdl" (UID: "be754e58-5d7e-41e6-b71d-cf6e995d2ac7") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [7662782fddbe1a8cd12a1c1dde1852aa7761708f5e82f92f3f4364019df58dc1] <==
	I0122 21:04:01.923908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-074508 -n test-preload-074508
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-074508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-074508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-074508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-074508: (1.014146844s)
--- FAIL: TestPreload (293.00s)

                                                
                                    
TestKubernetesUpgrade (434.02s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m12.932109265s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-168719] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-168719" primary control-plane node in "kubernetes-upgrade-168719" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 21:09:13.319995  293270 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:09:13.320110  293270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:09:13.320116  293270 out.go:358] Setting ErrFile to fd 2...
	I0122 21:09:13.320120  293270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:09:13.320355  293270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:09:13.321056  293270 out.go:352] Setting JSON to false
	I0122 21:09:13.322023  293270 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":13899,"bootTime":1737566254,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:09:13.322150  293270 start.go:139] virtualization: kvm guest
	I0122 21:09:13.324652  293270 out.go:177] * [kubernetes-upgrade-168719] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:09:13.326356  293270 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:09:13.326341  293270 notify.go:220] Checking for updates...
	I0122 21:09:13.329155  293270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:09:13.330875  293270 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:09:13.332646  293270 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:09:13.334243  293270 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:09:13.335613  293270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:09:13.337530  293270 config.go:182] Loaded profile config "NoKubernetes-347686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0122 21:09:13.337674  293270 config.go:182] Loaded profile config "cert-expiration-673511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:09:13.337827  293270 config.go:182] Loaded profile config "running-upgrade-484181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0122 21:09:13.337995  293270 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:09:13.389930  293270 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:09:13.391321  293270 start.go:297] selected driver: kvm2
	I0122 21:09:13.391352  293270 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:09:13.391371  293270 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:09:13.392637  293270 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:09:13.392777  293270 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:09:13.411013  293270 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:09:13.411080  293270 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 21:09:13.411351  293270 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 21:09:13.411387  293270 cni.go:84] Creating CNI manager for ""
	I0122 21:09:13.411495  293270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:09:13.411508  293270 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 21:09:13.411581  293270 start.go:340] cluster config:
	{Name:kubernetes-upgrade-168719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:09:13.411732  293270 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:09:13.413658  293270 out.go:177] * Starting "kubernetes-upgrade-168719" primary control-plane node in "kubernetes-upgrade-168719" cluster
	I0122 21:09:13.415108  293270 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 21:09:13.415191  293270 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0122 21:09:13.415213  293270 cache.go:56] Caching tarball of preloaded images
	I0122 21:09:13.415366  293270 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:09:13.415381  293270 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0122 21:09:13.415508  293270 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/config.json ...
	I0122 21:09:13.415537  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/config.json: {Name:mk29fc2bde14b60966fc6fe578140dd0df5a58fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:09:13.415726  293270 start.go:360] acquireMachinesLock for kubernetes-upgrade-168719: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:09:50.784308  293270 start.go:364] duration metric: took 37.368549992s to acquireMachinesLock for "kubernetes-upgrade-168719"
	I0122 21:09:50.784391  293270 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-168719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:09:50.784532  293270 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 21:09:50.786632  293270 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 21:09:50.786934  293270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:09:50.787015  293270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:09:50.805849  293270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42963
	I0122 21:09:50.806509  293270 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:09:50.807186  293270 main.go:141] libmachine: Using API Version  1
	I0122 21:09:50.807218  293270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:09:50.807661  293270 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:09:50.807906  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:09:50.808064  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:09:50.808259  293270 start.go:159] libmachine.API.Create for "kubernetes-upgrade-168719" (driver="kvm2")
	I0122 21:09:50.808298  293270 client.go:168] LocalClient.Create starting
	I0122 21:09:50.808352  293270 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem
	I0122 21:09:50.808399  293270 main.go:141] libmachine: Decoding PEM data...
	I0122 21:09:50.808417  293270 main.go:141] libmachine: Parsing certificate...
	I0122 21:09:50.808475  293270 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem
	I0122 21:09:50.808493  293270 main.go:141] libmachine: Decoding PEM data...
	I0122 21:09:50.808505  293270 main.go:141] libmachine: Parsing certificate...
	I0122 21:09:50.808521  293270 main.go:141] libmachine: Running pre-create checks...
	I0122 21:09:50.808532  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .PreCreateCheck
	I0122 21:09:50.808934  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetConfigRaw
	I0122 21:09:50.809443  293270 main.go:141] libmachine: Creating machine...
	I0122 21:09:50.809459  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .Create
	I0122 21:09:50.809666  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) creating KVM machine...
	I0122 21:09:50.809689  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) creating network...
	I0122 21:09:50.811354  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found existing default KVM network
	I0122 21:09:50.813031  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:50.812755  293582 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cc:44:96} reservation:<nil>}
	I0122 21:09:50.814390  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:50.814248  293582 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:08:ff} reservation:<nil>}
	I0122 21:09:50.815783  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:50.815612  293582 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9c:1a:61} reservation:<nil>}
	I0122 21:09:50.816970  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:50.816810  293582 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a1a00}
	I0122 21:09:50.817009  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | created network xml: 
	I0122 21:09:50.817022  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | <network>
	I0122 21:09:50.817032  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |   <name>mk-kubernetes-upgrade-168719</name>
	I0122 21:09:50.817046  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |   <dns enable='no'/>
	I0122 21:09:50.817056  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |   
	I0122 21:09:50.817074  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0122 21:09:50.817087  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |     <dhcp>
	I0122 21:09:50.817134  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0122 21:09:50.817170  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |     </dhcp>
	I0122 21:09:50.817186  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |   </ip>
	I0122 21:09:50.817198  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG |   
	I0122 21:09:50.817208  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | </network>
	I0122 21:09:50.817219  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | 
	I0122 21:09:50.823565  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | trying to create private KVM network mk-kubernetes-upgrade-168719 192.168.72.0/24...
	I0122 21:09:50.926569  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | private KVM network mk-kubernetes-upgrade-168719 192.168.72.0/24 created
	I0122 21:09:50.926616  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting up store path in /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719 ...
	I0122 21:09:50.926631  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:50.926572  293582 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:09:50.926646  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) building disk image from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 21:09:50.926763  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Downloading /home/jenkins/minikube-integration/20288-247142/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 21:09:51.274790  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:51.274613  293582 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa...
	I0122 21:09:51.680915  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:51.680758  293582 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/kubernetes-upgrade-168719.rawdisk...
	I0122 21:09:51.680954  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | Writing magic tar header
	I0122 21:09:51.680971  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | Writing SSH key tar header
	I0122 21:09:51.680982  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:51.680930  293582 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719 ...
	I0122 21:09:51.681048  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719
	I0122 21:09:51.681091  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines
	I0122 21:09:51.681115  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:09:51.681132  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142
	I0122 21:09:51.681153  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719 (perms=drwx------)
	I0122 21:09:51.681166  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 21:09:51.681178  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home/jenkins
	I0122 21:09:51.681189  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | checking permissions on dir: /home
	I0122 21:09:51.681199  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | skipping /home - not owner
	I0122 21:09:51.681215  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines (perms=drwxr-xr-x)
	I0122 21:09:51.681231  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube (perms=drwxr-xr-x)
	I0122 21:09:51.681240  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting executable bit set on /home/jenkins/minikube-integration/20288-247142 (perms=drwxrwxr-x)
	I0122 21:09:51.681254  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 21:09:51.681264  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 21:09:51.681275  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) creating domain...
	I0122 21:09:51.682975  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) define libvirt domain using xml: 
	I0122 21:09:51.683014  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) <domain type='kvm'>
	I0122 21:09:51.683027  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <name>kubernetes-upgrade-168719</name>
	I0122 21:09:51.683057  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <memory unit='MiB'>2200</memory>
	I0122 21:09:51.683073  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <vcpu>2</vcpu>
	I0122 21:09:51.683089  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <features>
	I0122 21:09:51.683101  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <acpi/>
	I0122 21:09:51.683111  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <apic/>
	I0122 21:09:51.683158  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <pae/>
	I0122 21:09:51.683194  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     
	I0122 21:09:51.683210  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   </features>
	I0122 21:09:51.683229  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <cpu mode='host-passthrough'>
	I0122 21:09:51.683255  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   
	I0122 21:09:51.683281  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   </cpu>
	I0122 21:09:51.683303  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <os>
	I0122 21:09:51.683315  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <type>hvm</type>
	I0122 21:09:51.683327  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <boot dev='cdrom'/>
	I0122 21:09:51.683354  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <boot dev='hd'/>
	I0122 21:09:51.683367  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <bootmenu enable='no'/>
	I0122 21:09:51.683374  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   </os>
	I0122 21:09:51.683382  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   <devices>
	I0122 21:09:51.683395  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <disk type='file' device='cdrom'>
	I0122 21:09:51.683417  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/boot2docker.iso'/>
	I0122 21:09:51.683431  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <target dev='hdc' bus='scsi'/>
	I0122 21:09:51.683443  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <readonly/>
	I0122 21:09:51.683451  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </disk>
	I0122 21:09:51.683462  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <disk type='file' device='disk'>
	I0122 21:09:51.683477  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 21:09:51.683495  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/kubernetes-upgrade-168719.rawdisk'/>
	I0122 21:09:51.683504  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <target dev='hda' bus='virtio'/>
	I0122 21:09:51.683512  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </disk>
	I0122 21:09:51.683519  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <interface type='network'>
	I0122 21:09:51.683529  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <source network='mk-kubernetes-upgrade-168719'/>
	I0122 21:09:51.683541  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <model type='virtio'/>
	I0122 21:09:51.683549  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </interface>
	I0122 21:09:51.683560  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <interface type='network'>
	I0122 21:09:51.683568  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <source network='default'/>
	I0122 21:09:51.683584  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <model type='virtio'/>
	I0122 21:09:51.683597  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </interface>
	I0122 21:09:51.683607  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <serial type='pty'>
	I0122 21:09:51.683616  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <target port='0'/>
	I0122 21:09:51.683625  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </serial>
	I0122 21:09:51.683635  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <console type='pty'>
	I0122 21:09:51.683645  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <target type='serial' port='0'/>
	I0122 21:09:51.683653  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </console>
	I0122 21:09:51.683660  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     <rng model='virtio'>
	I0122 21:09:51.683674  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)       <backend model='random'>/dev/random</backend>
	I0122 21:09:51.683684  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     </rng>
	I0122 21:09:51.683692  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     
	I0122 21:09:51.683702  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)     
	I0122 21:09:51.683720  293270 main.go:141] libmachine: (kubernetes-upgrade-168719)   </devices>
	I0122 21:09:51.683732  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) </domain>
	I0122 21:09:51.683747  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) 
	I0122 21:09:51.692966  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:20:0d:42 in network default
	I0122 21:09:51.693924  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) starting domain...
	I0122 21:09:51.693965  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) ensuring networks are active...
	I0122 21:09:51.693978  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:51.695191  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Ensuring network default is active
	I0122 21:09:51.695706  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Ensuring network mk-kubernetes-upgrade-168719 is active
	I0122 21:09:51.696643  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) getting domain XML...
	I0122 21:09:51.697592  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) creating domain...
	I0122 21:09:53.192394  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) waiting for IP...
	I0122 21:09:53.194148  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:53.194176  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:53.194214  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:53.193969  293582 retry.go:31] will retry after 204.848575ms: waiting for domain to come up
	I0122 21:09:53.400459  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:53.400918  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:53.400948  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:53.400911  293582 retry.go:31] will retry after 278.446026ms: waiting for domain to come up
	I0122 21:09:53.681767  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:53.682227  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:53.682277  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:53.682148  293582 retry.go:31] will retry after 474.134454ms: waiting for domain to come up
	I0122 21:09:54.157832  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:54.158456  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:54.158488  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:54.158410  293582 retry.go:31] will retry after 527.215474ms: waiting for domain to come up
	I0122 21:09:54.687387  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:54.688049  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:54.688078  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:54.687950  293582 retry.go:31] will retry after 459.300986ms: waiting for domain to come up
	I0122 21:09:55.148706  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:55.149231  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:55.149257  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:55.149215  293582 retry.go:31] will retry after 718.457155ms: waiting for domain to come up
	I0122 21:09:55.869748  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:55.870363  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:55.870398  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:55.870337  293582 retry.go:31] will retry after 1.059979241s: waiting for domain to come up
	I0122 21:09:56.932748  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:56.933369  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:56.933404  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:56.933336  293582 retry.go:31] will retry after 1.120289523s: waiting for domain to come up
	I0122 21:09:58.055441  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:58.055899  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:58.055932  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:58.055878  293582 retry.go:31] will retry after 1.36865947s: waiting for domain to come up
	I0122 21:09:59.426309  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:09:59.426872  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:09:59.426904  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:09:59.426840  293582 retry.go:31] will retry after 2.308085019s: waiting for domain to come up
	I0122 21:10:01.737212  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:01.737731  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:10:01.737766  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:10:01.737637  293582 retry.go:31] will retry after 2.6718135s: waiting for domain to come up
	I0122 21:10:04.410814  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:04.411309  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:10:04.411338  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:10:04.411280  293582 retry.go:31] will retry after 2.968070113s: waiting for domain to come up
	I0122 21:10:07.381751  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:07.382214  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:10:07.382243  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:10:07.382172  293582 retry.go:31] will retry after 3.083484824s: waiting for domain to come up
	I0122 21:10:10.469752  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:10.470433  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find current IP address of domain kubernetes-upgrade-168719 in network mk-kubernetes-upgrade-168719
	I0122 21:10:10.470460  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | I0122 21:10:10.470405  293582 retry.go:31] will retry after 3.440286674s: waiting for domain to come up
	I0122 21:10:13.914700  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:13.915310  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) found domain IP: 192.168.72.121
	I0122 21:10:13.915353  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has current primary IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:13.915363  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) reserving static IP address...
	I0122 21:10:13.915873  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-168719", mac: "52:54:00:c1:2b:b3", ip: "192.168.72.121"} in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.019326  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | Getting to WaitForSSH function...
	I0122 21:10:14.019367  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) reserved static IP address 192.168.72.121 for domain kubernetes-upgrade-168719
	I0122 21:10:14.019381  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) waiting for SSH...
	I0122 21:10:14.022722  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.023205  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.023244  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.023568  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | Using SSH client type: external
	I0122 21:10:14.023602  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa (-rw-------)
	I0122 21:10:14.023645  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:10:14.023663  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | About to run SSH command:
	I0122 21:10:14.023677  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | exit 0
	I0122 21:10:14.158969  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | SSH cmd err, output: <nil>: 
	I0122 21:10:14.159264  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) KVM machine creation complete
	I0122 21:10:14.159631  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetConfigRaw
	I0122 21:10:14.160258  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:14.160503  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:14.160633  293270 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 21:10:14.160658  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetState
	I0122 21:10:14.162296  293270 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 21:10:14.162316  293270 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 21:10:14.162321  293270 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 21:10:14.162327  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:14.164718  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.165160  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.165197  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.165347  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:14.165578  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.165741  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.165855  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:14.166016  293270 main.go:141] libmachine: Using SSH client type: native
	I0122 21:10:14.166277  293270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:10:14.166291  293270 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 21:10:14.285805  293270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:10:14.285842  293270 main.go:141] libmachine: Detecting the provisioner...
	I0122 21:10:14.285853  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:14.289150  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.289468  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.289506  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.289770  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:14.289995  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.290205  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.290364  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:14.290546  293270 main.go:141] libmachine: Using SSH client type: native
	I0122 21:10:14.290739  293270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:10:14.290749  293270 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 21:10:14.412265  293270 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 21:10:14.412367  293270 main.go:141] libmachine: found compatible host: buildroot
	I0122 21:10:14.412383  293270 main.go:141] libmachine: Provisioning with buildroot...
	I0122 21:10:14.412400  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:10:14.412700  293270 buildroot.go:166] provisioning hostname "kubernetes-upgrade-168719"
	I0122 21:10:14.412732  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:10:14.412938  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:14.415869  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.416333  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.416379  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.416527  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:14.416793  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.416985  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.417107  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:14.417297  293270 main.go:141] libmachine: Using SSH client type: native
	I0122 21:10:14.417483  293270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:10:14.417497  293270 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-168719 && echo "kubernetes-upgrade-168719" | sudo tee /etc/hostname
	I0122 21:10:14.553023  293270 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-168719
	
	I0122 21:10:14.553056  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:14.556367  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.556850  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.556928  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.557241  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:14.557486  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.557695  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:14.557860  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:14.558106  293270 main.go:141] libmachine: Using SSH client type: native
	I0122 21:10:14.558371  293270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:10:14.558391  293270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-168719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-168719/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-168719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:10:14.693510  293270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:10:14.693547  293270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:10:14.693572  293270 buildroot.go:174] setting up certificates
	I0122 21:10:14.693585  293270 provision.go:84] configureAuth start
	I0122 21:10:14.693595  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:10:14.693869  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:10:14.696588  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.696965  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.696994  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.697216  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:14.699606  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.699921  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:14.699954  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:14.700154  293270 provision.go:143] copyHostCerts
	I0122 21:10:14.700231  293270 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:10:14.700259  293270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:10:14.700351  293270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:10:14.700491  293270 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:10:14.700504  293270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:10:14.700537  293270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:10:14.700641  293270 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:10:14.700652  293270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:10:14.700695  293270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:10:14.700810  293270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-168719 san=[127.0.0.1 192.168.72.121 kubernetes-upgrade-168719 localhost minikube]
	I0122 21:10:15.031883  293270 provision.go:177] copyRemoteCerts
	I0122 21:10:15.031960  293270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:10:15.031990  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:15.035164  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.035558  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.035595  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.035873  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:15.036121  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.036308  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:15.036444  293270 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:10:15.125540  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0122 21:10:15.155310  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 21:10:15.184365  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:10:15.214318  293270 provision.go:87] duration metric: took 520.718064ms to configureAuth
	I0122 21:10:15.214352  293270 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:10:15.214519  293270 config.go:182] Loaded profile config "kubernetes-upgrade-168719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0122 21:10:15.214617  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:15.218053  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.218562  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.218609  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.218810  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:15.219033  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.219211  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.219425  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:15.219653  293270 main.go:141] libmachine: Using SSH client type: native
	I0122 21:10:15.219889  293270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:10:15.219906  293270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:10:15.484899  293270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:10:15.484953  293270 main.go:141] libmachine: Checking connection to Docker...
	I0122 21:10:15.484966  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetURL
	I0122 21:10:15.486602  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | using libvirt version 6000000
	I0122 21:10:15.489319  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.489774  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.489833  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.489997  293270 main.go:141] libmachine: Docker is up and running!
	I0122 21:10:15.490013  293270 main.go:141] libmachine: Reticulating splines...
	I0122 21:10:15.490022  293270 client.go:171] duration metric: took 24.68171117s to LocalClient.Create
	I0122 21:10:15.490055  293270 start.go:167] duration metric: took 24.681797357s to libmachine.API.Create "kubernetes-upgrade-168719"
	I0122 21:10:15.490068  293270 start.go:293] postStartSetup for "kubernetes-upgrade-168719" (driver="kvm2")
	I0122 21:10:15.490082  293270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:10:15.490107  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:15.490440  293270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:10:15.490476  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:15.493143  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.493647  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.493695  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.493858  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:15.494116  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.494317  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:15.494475  293270 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:10:15.586690  293270 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:10:15.592301  293270 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:10:15.592338  293270 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:10:15.592415  293270 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:10:15.592508  293270 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:10:15.592609  293270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:10:15.604009  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:10:15.635501  293270 start.go:296] duration metric: took 145.406899ms for postStartSetup
	I0122 21:10:15.635585  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetConfigRaw
	I0122 21:10:15.636264  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:10:15.639497  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.639813  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.639850  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.640166  293270 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/config.json ...
	I0122 21:10:15.640416  293270 start.go:128] duration metric: took 24.85586878s to createHost
	I0122 21:10:15.640447  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:15.643481  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.643868  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.643905  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.644119  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:15.644362  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.644574  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.644805  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:15.645024  293270 main.go:141] libmachine: Using SSH client type: native
	I0122 21:10:15.645277  293270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:10:15.645295  293270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:10:15.767803  293270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580215.748148487
	
	I0122 21:10:15.767836  293270 fix.go:216] guest clock: 1737580215.748148487
	I0122 21:10:15.767846  293270 fix.go:229] Guest: 2025-01-22 21:10:15.748148487 +0000 UTC Remote: 2025-01-22 21:10:15.64043287 +0000 UTC m=+62.364316434 (delta=107.715617ms)
	I0122 21:10:15.767880  293270 fix.go:200] guest clock delta is within tolerance: 107.715617ms
	I0122 21:10:15.767888  293270 start.go:83] releasing machines lock for "kubernetes-upgrade-168719", held for 24.983537315s
	I0122 21:10:15.767920  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:15.768208  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:10:15.771759  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.772287  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.772328  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.772575  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:15.773251  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:15.773449  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:10:15.773551  293270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:10:15.773612  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:15.773746  293270 ssh_runner.go:195] Run: cat /version.json
	I0122 21:10:15.773778  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:10:15.776955  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.776997  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.777403  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.777465  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:15.777491  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.777516  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:15.777623  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:15.777737  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:10:15.777838  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.777903  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:10:15.778010  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:15.778082  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:10:15.778178  293270 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:10:15.778259  293270 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:10:15.893735  293270 ssh_runner.go:195] Run: systemctl --version
	I0122 21:10:15.900871  293270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:10:16.082166  293270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:10:16.089819  293270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:10:16.089912  293270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:10:16.111601  293270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:10:16.111629  293270 start.go:495] detecting cgroup driver to use...
	I0122 21:10:16.111696  293270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:10:16.131530  293270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:10:16.148804  293270 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:10:16.148936  293270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:10:16.166741  293270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:10:16.187449  293270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:10:16.324722  293270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:10:16.509019  293270 docker.go:233] disabling docker service ...
	I0122 21:10:16.509118  293270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:10:16.526420  293270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:10:16.542978  293270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:10:16.681853  293270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:10:16.810532  293270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:10:16.829095  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:10:16.852571  293270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0122 21:10:16.852652  293270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:10:16.867421  293270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:10:16.867523  293270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:10:16.882540  293270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:10:16.897198  293270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:10:16.917345  293270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:10:16.936330  293270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:10:16.951096  293270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:10:16.951207  293270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:10:16.967814  293270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:10:16.982067  293270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:10:17.118912  293270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:10:17.224000  293270 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:10:17.224101  293270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:10:17.230124  293270 start.go:563] Will wait 60s for crictl version
	I0122 21:10:17.230230  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:17.234992  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:10:17.287639  293270 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:10:17.287747  293270 ssh_runner.go:195] Run: crio --version
	I0122 21:10:17.320833  293270 ssh_runner.go:195] Run: crio --version
	I0122 21:10:17.362604  293270 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0122 21:10:17.364255  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:10:17.367167  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:17.367627  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:10:07 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:10:17.367666  293270 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:10:17.368041  293270 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0122 21:10:17.374517  293270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:10:17.393228  293270 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-168719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.121 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:10:17.393361  293270 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 21:10:17.393429  293270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:10:17.435108  293270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0122 21:10:17.435210  293270 ssh_runner.go:195] Run: which lz4
	I0122 21:10:17.440630  293270 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:10:17.445844  293270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:10:17.445897  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0122 21:10:19.445466  293270 crio.go:462] duration metric: took 2.004881622s to copy over tarball
	I0122 21:10:19.445569  293270 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:10:22.464239  293270 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.018607696s)
	I0122 21:10:22.464281  293270 crio.go:469] duration metric: took 3.018772783s to extract the tarball
	I0122 21:10:22.464293  293270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:10:22.511371  293270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:10:22.570611  293270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0122 21:10:22.570655  293270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0122 21:10:22.570734  293270 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:22.570795  293270 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0122 21:10:22.570824  293270 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:22.570802  293270 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0122 21:10:22.570860  293270 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:22.570760  293270 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:22.571094  293270 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:22.570747  293270 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:10:22.572204  293270 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0122 21:10:22.572217  293270 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:22.572259  293270 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:10:22.572227  293270 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:22.572458  293270 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:22.572519  293270 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:22.572607  293270 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0122 21:10:22.572534  293270 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:22.718971  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:22.734059  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:22.750505  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:22.753527  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:22.758247  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:22.761902  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0122 21:10:22.799159  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0122 21:10:22.852728  293270 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0122 21:10:22.852808  293270 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:22.852870  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:22.866910  293270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0122 21:10:22.866978  293270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:22.867021  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:22.942732  293270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0122 21:10:22.942782  293270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:22.942794  293270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0122 21:10:22.942827  293270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:22.942839  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:22.942868  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:22.965061  293270 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0122 21:10:22.965113  293270 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0122 21:10:22.965157  293270 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0122 21:10:22.965202  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:22.965127  293270 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0122 21:10:22.965224  293270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0122 21:10:22.965202  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:22.965212  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:22.965264  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:22.965293  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:22.965288  293270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:22.965293  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:22.965332  293270 ssh_runner.go:195] Run: which crictl
	I0122 21:10:23.084969  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:23.085057  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:10:23.085086  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:23.085118  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:23.085147  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:10:23.085250  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:23.085265  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:23.274741  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:10:23.274824  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:23.274889  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:10:23.274944  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:10:23.277040  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:10:23.409438  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:10:23.409458  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0122 21:10:23.409496  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0122 21:10:23.409576  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:10:23.409586  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0122 21:10:23.409661  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:10:23.409680  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:10:23.461528  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0122 21:10:23.502091  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0122 21:10:23.512653  293270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:10:23.521006  293270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:10:23.521045  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0122 21:10:23.696401  293270 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0122 21:10:23.696498  293270 cache_images.go:92] duration metric: took 1.125823522s to LoadCachedImages
	W0122 21:10:23.696604  293270 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0122 21:10:23.696619  293270 kubeadm.go:934] updating node { 192.168.72.121 8443 v1.20.0 crio true true} ...
	I0122 21:10:23.696763  293270 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-168719 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:10:23.696851  293270 ssh_runner.go:195] Run: crio config
	I0122 21:10:23.755847  293270 cni.go:84] Creating CNI manager for ""
	I0122 21:10:23.755877  293270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:10:23.755888  293270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:10:23.755908  293270 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.121 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-168719 NodeName:kubernetes-upgrade-168719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0122 21:10:23.756077  293270 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-168719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:10:23.756168  293270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0122 21:10:23.767757  293270 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:10:23.767847  293270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:10:23.781823  293270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0122 21:10:23.804782  293270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:10:23.824818  293270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0122 21:10:23.845700  293270 ssh_runner.go:195] Run: grep 192.168.72.121	control-plane.minikube.internal$ /etc/hosts
	I0122 21:10:23.850605  293270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:10:23.867318  293270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:10:24.050709  293270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:10:24.073944  293270 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719 for IP: 192.168.72.121
	I0122 21:10:24.073972  293270 certs.go:194] generating shared ca certs ...
	I0122 21:10:24.073995  293270 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.074219  293270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:10:24.074282  293270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:10:24.074293  293270 certs.go:256] generating profile certs ...
	I0122 21:10:24.074379  293270 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.key
	I0122 21:10:24.074395  293270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.crt with IP's: []
	I0122 21:10:24.468183  293270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.crt ...
	I0122 21:10:24.468231  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.crt: {Name:mkaa80feb73433c9c6492cc867e41881990e6ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.473212  293270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.key ...
	I0122 21:10:24.473253  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.key: {Name:mk280f98422b270895d329f1145144a47e10039d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.473447  293270 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key.db907cc9
	I0122 21:10:24.473474  293270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt.db907cc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.121]
	I0122 21:10:24.595262  293270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt.db907cc9 ...
	I0122 21:10:24.595298  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt.db907cc9: {Name:mk6b8a90394660b3c049999ee498ebf124ad49a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.600285  293270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key.db907cc9 ...
	I0122 21:10:24.600338  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key.db907cc9: {Name:mk7739dda8553029bb1ecda46b4240112e209963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.600510  293270 certs.go:381] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt.db907cc9 -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt
	I0122 21:10:24.600615  293270 certs.go:385] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key.db907cc9 -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key
	I0122 21:10:24.600734  293270 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.key
	I0122 21:10:24.600764  293270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.crt with IP's: []
	I0122 21:10:24.701520  293270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.crt ...
	I0122 21:10:24.701564  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.crt: {Name:mk88093803d4c3de65e46e6b422efc040a0800f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.701837  293270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.key ...
	I0122 21:10:24.701863  293270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.key: {Name:mk452cabfef14ce3aaa6545519ddae07d5ed2ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:10:24.702122  293270 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:10:24.702198  293270 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:10:24.702215  293270 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:10:24.702252  293270 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:10:24.702290  293270 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:10:24.702322  293270 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:10:24.702378  293270 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:10:24.703196  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:10:24.749917  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:10:24.789355  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:10:24.826682  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:10:24.866822  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0122 21:10:24.906779  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:10:24.945151  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:10:24.984468  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:10:25.017565  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:10:25.064392  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:10:25.103949  293270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:10:25.152213  293270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:10:25.183438  293270 ssh_runner.go:195] Run: openssl version
	I0122 21:10:25.193149  293270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:10:25.212100  293270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:10:25.219369  293270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:10:25.219461  293270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:10:25.229803  293270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:10:25.246878  293270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:10:25.267981  293270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:10:25.273883  293270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:10:25.273971  293270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:10:25.281186  293270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:10:25.299718  293270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:10:25.317586  293270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:10:25.325226  293270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:10:25.325308  293270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:10:25.333550  293270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:10:25.350172  293270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:10:25.356815  293270 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 21:10:25.356909  293270 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-168719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.121 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:10:25.357031  293270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:10:25.357108  293270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:10:25.420894  293270 cri.go:89] found id: ""
	I0122 21:10:25.420982  293270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:10:25.438164  293270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:10:25.454595  293270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:10:25.469671  293270 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:10:25.469707  293270 kubeadm.go:157] found existing configuration files:
	
	I0122 21:10:25.469779  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:10:25.484129  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:10:25.484222  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:10:25.512784  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:10:25.538985  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:10:25.539081  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:10:25.557385  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:10:25.589105  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:10:25.589189  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:10:25.606505  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:10:25.631813  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:10:25.631901  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:10:25.646289  293270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:10:26.177217  293270 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:12:25.374398  293270 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:12:25.374639  293270 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:12:25.375671  293270 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:12:25.375779  293270 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:12:25.375889  293270 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:12:25.376020  293270 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:12:25.376182  293270 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:12:25.376295  293270 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:12:25.378303  293270 out.go:235]   - Generating certificates and keys ...
	I0122 21:12:25.378419  293270 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:12:25.378493  293270 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:12:25.378573  293270 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 21:12:25.378641  293270 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 21:12:25.378717  293270 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 21:12:25.378783  293270 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 21:12:25.378867  293270 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 21:12:25.379040  293270 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-168719 localhost] and IPs [192.168.72.121 127.0.0.1 ::1]
	I0122 21:12:25.379113  293270 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 21:12:25.379298  293270 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-168719 localhost] and IPs [192.168.72.121 127.0.0.1 ::1]
	I0122 21:12:25.379574  293270 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 21:12:25.379774  293270 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 21:12:25.379968  293270 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0122 21:12:25.380129  293270 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:12:25.380271  293270 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:12:25.380410  293270 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:12:25.380509  293270 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:12:25.380597  293270 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:12:25.380725  293270 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:12:25.380813  293270 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:12:25.380854  293270 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:12:25.380923  293270 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:12:25.382800  293270 out.go:235]   - Booting up control plane ...
	I0122 21:12:25.382916  293270 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:12:25.383001  293270 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:12:25.383099  293270 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:12:25.383255  293270 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:12:25.383488  293270 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:12:25.383568  293270 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:12:25.383664  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:12:25.383940  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:12:25.384076  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:12:25.384253  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:12:25.384346  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:12:25.384510  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:12:25.384565  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:12:25.384746  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:12:25.384857  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:12:25.385050  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:12:25.385062  293270 kubeadm.go:310] 
	I0122 21:12:25.385097  293270 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:12:25.385134  293270 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:12:25.385140  293270 kubeadm.go:310] 
	I0122 21:12:25.385196  293270 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:12:25.385244  293270 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:12:25.385335  293270 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:12:25.385342  293270 kubeadm.go:310] 
	I0122 21:12:25.385427  293270 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:12:25.385462  293270 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:12:25.385491  293270 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:12:25.385497  293270 kubeadm.go:310] 
	I0122 21:12:25.385638  293270 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:12:25.385759  293270 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:12:25.385769  293270 kubeadm.go:310] 
	I0122 21:12:25.385912  293270 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:12:25.386036  293270 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:12:25.386115  293270 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:12:25.386197  293270 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:12:25.386310  293270 kubeadm.go:310] 
	W0122 21:12:25.386374  293270 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-168719 localhost] and IPs [192.168.72.121 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-168719 localhost] and IPs [192.168.72.121 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-168719 localhost] and IPs [192.168.72.121 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-168719 localhost] and IPs [192.168.72.121 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0122 21:12:25.386424  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:12:28.804218  293270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.417763066s)
	I0122 21:12:28.804311  293270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:12:28.823188  293270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:12:28.836670  293270 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:12:28.836707  293270 kubeadm.go:157] found existing configuration files:
	
	I0122 21:12:28.836778  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:12:28.849477  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:12:28.849561  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:12:28.863288  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:12:28.875287  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:12:28.875405  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:12:28.887649  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:12:28.899073  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:12:28.899143  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:12:28.910888  293270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:12:28.923672  293270 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:12:28.923759  293270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:12:28.938324  293270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:12:29.029044  293270 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:12:29.029132  293270 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:12:29.198289  293270 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:12:29.198496  293270 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:12:29.198657  293270 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:12:29.436264  293270 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:12:29.439188  293270 out.go:235]   - Generating certificates and keys ...
	I0122 21:12:29.439309  293270 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:12:29.439387  293270 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:12:29.439529  293270 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:12:29.439627  293270 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:12:29.439712  293270 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:12:29.439781  293270 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:12:29.439895  293270 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:12:29.439997  293270 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:12:29.443974  293270 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:12:29.444120  293270 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:12:29.444202  293270 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:12:29.444291  293270 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:12:29.614631  293270 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:12:29.695666  293270 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:12:29.837390  293270 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:12:29.969805  293270 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:12:29.988050  293270 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:12:29.989503  293270 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:12:29.989584  293270 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:12:30.166138  293270 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:12:30.168306  293270 out.go:235]   - Booting up control plane ...
	I0122 21:12:30.168469  293270 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:12:30.168957  293270 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:12:30.171364  293270 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:12:30.172926  293270 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:12:30.176113  293270 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:13:10.178510  293270 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:13:10.179117  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:13:10.179348  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:13:15.179895  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:13:15.180155  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:13:25.180895  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:13:25.181216  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:13:45.182441  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:13:45.182660  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:14:25.182973  293270 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:14:25.183211  293270 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:14:25.183226  293270 kubeadm.go:310] 
	I0122 21:14:25.183268  293270 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:14:25.183310  293270 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:14:25.183319  293270 kubeadm.go:310] 
	I0122 21:14:25.183356  293270 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:14:25.183391  293270 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:14:25.183493  293270 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:14:25.183499  293270 kubeadm.go:310] 
	I0122 21:14:25.183598  293270 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:14:25.183633  293270 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:14:25.183665  293270 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:14:25.183671  293270 kubeadm.go:310] 
	I0122 21:14:25.183788  293270 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:14:25.183892  293270 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:14:25.183900  293270 kubeadm.go:310] 
	I0122 21:14:25.184056  293270 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:14:25.184170  293270 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:14:25.184268  293270 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:14:25.184361  293270 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:14:25.184368  293270 kubeadm.go:310] 
	I0122 21:14:25.186971  293270 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:14:25.187108  293270 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:14:25.187193  293270 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:14:25.187398  293270 kubeadm.go:394] duration metric: took 3m59.830491451s to StartCluster
	I0122 21:14:25.187471  293270 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:14:25.187554  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:14:25.248062  293270 cri.go:89] found id: ""
	I0122 21:14:25.248099  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.248111  293270 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:14:25.248121  293270 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:14:25.248198  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:14:25.297676  293270 cri.go:89] found id: ""
	I0122 21:14:25.297714  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.297726  293270 logs.go:284] No container was found matching "etcd"
	I0122 21:14:25.297734  293270 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:14:25.297808  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:14:25.349624  293270 cri.go:89] found id: ""
	I0122 21:14:25.349677  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.349690  293270 logs.go:284] No container was found matching "coredns"
	I0122 21:14:25.349711  293270 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:14:25.349781  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:14:25.398494  293270 cri.go:89] found id: ""
	I0122 21:14:25.398534  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.398546  293270 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:14:25.398555  293270 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:14:25.398632  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:14:25.456006  293270 cri.go:89] found id: ""
	I0122 21:14:25.456050  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.456078  293270 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:14:25.456087  293270 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:14:25.456173  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:14:25.513427  293270 cri.go:89] found id: ""
	I0122 21:14:25.513467  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.513476  293270 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:14:25.513485  293270 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:14:25.513568  293270 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:14:25.571938  293270 cri.go:89] found id: ""
	I0122 21:14:25.572063  293270 logs.go:282] 0 containers: []
	W0122 21:14:25.572090  293270 logs.go:284] No container was found matching "kindnet"
	I0122 21:14:25.572131  293270 logs.go:123] Gathering logs for kubelet ...
	I0122 21:14:25.572167  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:14:25.637881  293270 logs.go:123] Gathering logs for dmesg ...
	I0122 21:14:25.637948  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:14:25.657337  293270 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:14:25.657441  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:14:25.848117  293270 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:14:25.848143  293270 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:14:25.848157  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:14:26.035541  293270 logs.go:123] Gathering logs for container status ...
	I0122 21:14:26.035668  293270 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0122 21:14:26.111745  293270 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:14:26.111836  293270 out.go:270] * 
	* 
	W0122 21:14:26.111917  293270 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:14:26.111942  293270 out.go:270] * 
	* 
	W0122 21:14:26.113162  293270 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:14:26.168671  293270 out.go:201] 
	W0122 21:14:26.172573  293270 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:14:26.172677  293270 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:14:26.172705  293270 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:14:26.177042  293270 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
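Note: the exit status 109 above corresponds to the K8S_KUBELET_NOT_RUNNING error in the captured stderr, and the log itself already points at the kubelet and suggests a cgroup-driver override. A minimal local reproduction/triage sketch, assuming the same kvm2 + cri-o environment as this job (the profile name is taken from this run and is otherwise illustrative):

	# reproduce the failing start with the oldest supported Kubernetes version
	out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

	# inspect the kubelet inside the guest, as the kubeadm output advises
	out/minikube-linux-amd64 -p kubernetes-upgrade-168719 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-168719 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# retry with the cgroup-driver override suggested in the failure message
	out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

These commands only restate what the report's own suggestion and audit entries use; they are not a confirmed fix for this failure.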
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-168719
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-168719: (3.424857831s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-168719 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-168719 status --format={{.Host}}: exit status 7 (88.050188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.16708079s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-168719 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (128.186405ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-168719] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-168719
	    minikube start -p kubernetes-upgrade-168719 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1687192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-168719 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
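Note: the downgrade attempt above fails by design; minikube refuses to move an existing v1.32.1 cluster back to v1.20.0 and exits with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), so the test expects this non-zero exit. If a downgrade were actually wanted, the suggestion in the stderr block amounts to recreating the profile; a sketch of that destructive path (it discards the existing cluster state) would be:

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-168719
	out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio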
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168719 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.99976116s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-22 21:16:21.17156461 +0000 UTC m=+4456.569388681
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-168719 -n kubernetes-upgrade-168719
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-168719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-168719 logs -n 25: (4.266852552s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo cat                    | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo cat                    | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo cat                    | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-804887 sudo                        | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-804887                             | custom-flannel-804887 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	| start   | -p bridge-804887 --memory=3072                       | bridge-804887         | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:15:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:15:40.869770  303786 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:15:40.869939  303786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:15:40.869948  303786 out.go:358] Setting ErrFile to fd 2...
	I0122 21:15:40.869953  303786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:15:40.870145  303786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:15:40.870876  303786 out.go:352] Setting JSON to false
	I0122 21:15:40.872092  303786 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14287,"bootTime":1737566254,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:15:40.872247  303786 start.go:139] virtualization: kvm guest
	I0122 21:15:40.874360  303786 out.go:177] * [bridge-804887] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:15:40.875895  303786 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:15:40.875880  303786 notify.go:220] Checking for updates...
	I0122 21:15:40.877343  303786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:15:40.878787  303786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:15:40.880117  303786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:15:40.881662  303786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:15:40.883298  303786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:15:40.885157  303786 config.go:182] Loaded profile config "enable-default-cni-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:15:40.885274  303786 config.go:182] Loaded profile config "flannel-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:15:40.885351  303786 config.go:182] Loaded profile config "kubernetes-upgrade-168719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:15:40.885488  303786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:15:40.927004  303786 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:15:40.928303  303786 start.go:297] selected driver: kvm2
	I0122 21:15:40.928322  303786 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:15:40.928336  303786 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:15:40.929108  303786 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:15:40.929201  303786 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:15:40.949347  303786 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:15:40.949420  303786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 21:15:40.949806  303786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:15:40.949845  303786 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:15:40.949852  303786 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 21:15:40.949915  303786 start.go:340] cluster config:
	{Name:bridge-804887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-804887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:15:40.950021  303786 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:15:40.952673  303786 out.go:177] * Starting "bridge-804887" primary control-plane node in "bridge-804887" cluster
	I0122 21:15:36.925912  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:36.926576  300719 main.go:141] libmachine: (enable-default-cni-804887) found domain IP: 192.168.39.32
	I0122 21:15:36.926608  300719 main.go:141] libmachine: (enable-default-cni-804887) reserving static IP address...
	I0122 21:15:36.926626  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has current primary IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:36.926973  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-804887", mac: "52:54:00:54:87:43", ip: "192.168.39.32"} in network mk-enable-default-cni-804887
	I0122 21:15:37.034078  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Getting to WaitForSSH function...
	I0122 21:15:37.034135  300719 main.go:141] libmachine: (enable-default-cni-804887) reserved static IP address 192.168.39.32 for domain enable-default-cni-804887
	I0122 21:15:37.034150  300719 main.go:141] libmachine: (enable-default-cni-804887) waiting for SSH...
	I0122 21:15:37.037743  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:37.038207  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887
	I0122 21:15:37.038241  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | unable to find defined IP address of network mk-enable-default-cni-804887 interface with MAC address 52:54:00:54:87:43
	I0122 21:15:37.038383  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Using SSH client type: external
	I0122 21:15:37.038427  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa (-rw-------)
	I0122 21:15:37.038461  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:15:37.038471  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | About to run SSH command:
	I0122 21:15:37.038485  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | exit 0
	I0122 21:15:37.042952  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | SSH cmd err, output: exit status 255: 
	I0122 21:15:37.042994  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0122 21:15:37.043007  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | command : exit 0
	I0122 21:15:37.043015  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | err     : exit status 255
	I0122 21:15:37.043025  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | output  : 
	I0122 21:15:40.043889  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Getting to WaitForSSH function...
	I0122 21:15:40.584740  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.585399  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:40.585428  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.585589  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Using SSH client type: external
	I0122 21:15:40.585619  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa (-rw-------)
	I0122 21:15:40.585671  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:15:40.585691  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | About to run SSH command:
	I0122 21:15:40.585708  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | exit 0
	I0122 21:15:40.714942  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | SSH cmd err, output: <nil>: 
	I0122 21:15:40.715284  300719 main.go:141] libmachine: (enable-default-cni-804887) KVM machine creation complete
	I0122 21:15:40.715662  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetConfigRaw
	I0122 21:15:40.716334  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:40.716560  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:40.716701  300719 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 21:15:40.716716  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetState
	I0122 21:15:40.718298  300719 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 21:15:40.718319  300719 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 21:15:40.718327  300719 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 21:15:40.718336  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:40.721176  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.721582  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:40.721612  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.721778  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:40.722003  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:40.722159  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:40.722321  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:40.722490  300719 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:40.722741  300719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0122 21:15:40.722756  300719 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 21:15:40.827117  300719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:15:40.827149  300719 main.go:141] libmachine: Detecting the provisioner...
	I0122 21:15:40.827158  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:40.830167  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.830575  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:40.830604  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.830797  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:40.831099  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:40.831294  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:40.831436  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:40.831630  300719 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:40.832026  300719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0122 21:15:40.832048  300719 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 21:15:40.945064  300719 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 21:15:40.945172  300719 main.go:141] libmachine: found compatible host: buildroot
	I0122 21:15:40.945182  300719 main.go:141] libmachine: Provisioning with buildroot...
	I0122 21:15:40.945194  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetMachineName
	I0122 21:15:40.945512  300719 buildroot.go:166] provisioning hostname "enable-default-cni-804887"
	I0122 21:15:40.945541  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetMachineName
	I0122 21:15:40.945757  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:40.948830  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.949218  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:40.949253  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:40.949431  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:40.949663  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:40.950072  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:40.950295  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:40.950489  300719 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:40.950670  300719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0122 21:15:40.950682  300719 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-804887 && echo "enable-default-cni-804887" | sudo tee /etc/hostname
	I0122 21:15:42.211942  301555 start.go:364] duration metric: took 23.809750327s to acquireMachinesLock for "kubernetes-upgrade-168719"
	I0122 21:15:42.212004  301555 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:15:42.212036  301555 fix.go:54] fixHost starting: 
	I0122 21:15:42.212527  301555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:15:42.212585  301555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:15:42.231394  301555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0122 21:15:42.231901  301555 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:15:42.232443  301555 main.go:141] libmachine: Using API Version  1
	I0122 21:15:42.232472  301555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:15:42.232896  301555 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:15:42.233126  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:42.233302  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetState
	I0122 21:15:42.235322  301555 fix.go:112] recreateIfNeeded on kubernetes-upgrade-168719: state=Running err=<nil>
	W0122 21:15:42.235352  301555 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:15:42.237265  301555 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-168719" VM ...
	I0122 21:15:42.238688  301555 machine.go:93] provisionDockerMachine start ...
	I0122 21:15:42.238729  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:42.239004  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:42.242387  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.242839  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:42.242878  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.243059  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:42.243293  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.243476  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.243646  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:42.243843  301555 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:42.244104  301555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:15:42.244126  301555 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:15:42.363956  301555 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-168719
	
	I0122 21:15:42.363990  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:15:42.364236  301555 buildroot.go:166] provisioning hostname "kubernetes-upgrade-168719"
	I0122 21:15:42.364258  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:15:42.364452  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:42.367448  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.367880  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:42.367916  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.368138  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:42.368349  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.368499  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.368710  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:42.368894  301555 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:42.369170  301555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:15:42.369199  301555 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-168719 && echo "kubernetes-upgrade-168719" | sudo tee /etc/hostname
	I0122 21:15:42.524651  301555 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-168719
	
	I0122 21:15:42.524688  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:42.527871  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.528339  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:42.528382  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.528615  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:42.528867  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.529096  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.529299  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:42.529485  301555 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:42.529746  301555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:15:42.529773  301555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-168719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-168719/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-168719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:15:42.655731  301555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:15:42.655774  301555 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:15:42.655826  301555 buildroot.go:174] setting up certificates
	I0122 21:15:42.655841  301555 provision.go:84] configureAuth start
	I0122 21:15:42.655859  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetMachineName
	I0122 21:15:42.656203  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:15:42.659458  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.659772  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:42.659806  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.659933  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:42.662288  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.662700  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:42.662740  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.662892  301555 provision.go:143] copyHostCerts
	I0122 21:15:42.662946  301555 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:15:42.662966  301555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:15:42.663026  301555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:15:42.663131  301555 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:15:42.663141  301555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:15:42.663166  301555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:15:42.663247  301555 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:15:42.663259  301555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:15:42.663282  301555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:15:42.663363  301555 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-168719 san=[127.0.0.1 192.168.72.121 kubernetes-upgrade-168719 localhost minikube]
	I0122 21:15:42.906564  301555 provision.go:177] copyRemoteCerts
	I0122 21:15:42.906638  301555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:15:42.906667  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:42.909906  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.910242  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:42.910269  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:42.910512  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:42.910767  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:42.910942  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:42.911132  301555 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:15:43.002995  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:15:43.038849  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0122 21:15:43.079908  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:15:43.113320  301555 provision.go:87] duration metric: took 457.455181ms to configureAuth
	I0122 21:15:43.113370  301555 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:15:43.113629  301555 config.go:182] Loaded profile config "kubernetes-upgrade-168719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:15:43.113743  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:43.117136  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:43.117472  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:43.117510  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:43.117721  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:43.117985  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:43.118164  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:43.118358  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:43.118518  301555 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:43.118721  301555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:15:43.118739  301555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:15:41.067045  300719 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-804887
	
	I0122 21:15:41.067079  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:41.070000  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.070435  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.070488  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.070643  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:41.070827  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:41.070990  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:41.071147  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:41.071333  300719 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:41.071608  300719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0122 21:15:41.071639  300719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-804887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-804887/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-804887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:15:41.180321  300719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:15:41.180364  300719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:15:41.180406  300719 buildroot.go:174] setting up certificates
	I0122 21:15:41.180426  300719 provision.go:84] configureAuth start
	I0122 21:15:41.180452  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetMachineName
	I0122 21:15:41.180778  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetIP
	I0122 21:15:41.183725  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.184059  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.184097  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.184305  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:41.186793  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.187262  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.187306  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.187475  300719 provision.go:143] copyHostCerts
	I0122 21:15:41.187605  300719 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:15:41.187630  300719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:15:41.187736  300719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:15:41.187867  300719 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:15:41.187880  300719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:15:41.187920  300719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:15:41.188015  300719 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:15:41.188027  300719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:15:41.188063  300719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:15:41.188149  300719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-804887 san=[127.0.0.1 192.168.39.32 enable-default-cni-804887 localhost minikube]
	I0122 21:15:41.342540  300719 provision.go:177] copyRemoteCerts
	I0122 21:15:41.342626  300719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:15:41.342661  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:41.346070  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.346499  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.346531  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.346745  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:41.346988  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:41.347130  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:41.347342  300719 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa Username:docker}
	I0122 21:15:41.429495  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:15:41.458970  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0122 21:15:41.488602  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:15:41.519636  300719 provision.go:87] duration metric: took 339.186088ms to configureAuth
	I0122 21:15:41.519669  300719 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:15:41.519883  300719 config.go:182] Loaded profile config "enable-default-cni-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:15:41.519971  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:41.522835  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.523133  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.523168  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.523343  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:41.523644  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:41.523853  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:41.524034  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:41.524234  300719 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:41.524560  300719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0122 21:15:41.524585  300719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:15:41.951847  300719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:15:41.951901  300719 main.go:141] libmachine: Checking connection to Docker...
	I0122 21:15:41.951911  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetURL
	I0122 21:15:41.953411  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | using libvirt version 6000000
	I0122 21:15:41.955992  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.956360  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.956396  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.956641  300719 main.go:141] libmachine: Docker is up and running!
	I0122 21:15:41.956667  300719 main.go:141] libmachine: Reticulating splines...
	I0122 21:15:41.956677  300719 client.go:171] duration metric: took 30.84177986s to LocalClient.Create
	I0122 21:15:41.956711  300719 start.go:167] duration metric: took 30.841874822s to libmachine.API.Create "enable-default-cni-804887"
	I0122 21:15:41.956726  300719 start.go:293] postStartSetup for "enable-default-cni-804887" (driver="kvm2")
	I0122 21:15:41.956742  300719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:15:41.956768  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:41.957077  300719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:15:41.957114  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:41.959805  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.960284  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:41.960320  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:41.960553  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:41.960794  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:41.961030  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:41.961175  300719 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa Username:docker}
	I0122 21:15:42.048078  300719 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:15:42.053744  300719 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:15:42.053782  300719 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:15:42.053883  300719 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:15:42.054012  300719 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:15:42.054197  300719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:15:42.066275  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:15:42.097692  300719 start.go:296] duration metric: took 140.944039ms for postStartSetup
	I0122 21:15:42.097784  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetConfigRaw
	I0122 21:15:42.098498  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetIP
	I0122 21:15:42.101908  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.102325  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:42.102361  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.102681  300719 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/config.json ...
	I0122 21:15:42.102938  300719 start.go:128] duration metric: took 31.012911734s to createHost
	I0122 21:15:42.102974  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:42.105805  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.106207  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:42.106243  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.106413  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:42.106661  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:42.106827  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:42.106960  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:42.107110  300719 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:42.107366  300719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0122 21:15:42.107383  300719 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:15:42.211753  300719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580542.197542019
	
	I0122 21:15:42.211785  300719 fix.go:216] guest clock: 1737580542.197542019
	I0122 21:15:42.211794  300719 fix.go:229] Guest: 2025-01-22 21:15:42.197542019 +0000 UTC Remote: 2025-01-22 21:15:42.10295688 +0000 UTC m=+31.160128175 (delta=94.585139ms)
	I0122 21:15:42.211824  300719 fix.go:200] guest clock delta is within tolerance: 94.585139ms
	I0122 21:15:42.211831  300719 start.go:83] releasing machines lock for "enable-default-cni-804887", held for 31.121936601s
	I0122 21:15:42.211865  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:42.212218  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetIP
	I0122 21:15:42.215311  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.215631  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:42.215670  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.215883  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:42.216648  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:42.216891  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .DriverName
	I0122 21:15:42.217006  300719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:15:42.217054  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:42.217128  300719 ssh_runner.go:195] Run: cat /version.json
	I0122 21:15:42.217175  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHHostname
	I0122 21:15:42.220242  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.220360  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.220598  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:42.220625  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.220843  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:42.220889  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:42.220934  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:42.221031  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:42.221099  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHPort
	I0122 21:15:42.221228  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:42.221243  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHKeyPath
	I0122 21:15:42.221399  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetSSHUsername
	I0122 21:15:42.221398  300719 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa Username:docker}
	I0122 21:15:42.221525  300719 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/enable-default-cni-804887/id_rsa Username:docker}
	I0122 21:15:42.305377  300719 ssh_runner.go:195] Run: systemctl --version
	I0122 21:15:42.328796  300719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:15:42.495248  300719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:15:42.502967  300719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:15:42.503055  300719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:15:42.523657  300719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:15:42.523700  300719 start.go:495] detecting cgroup driver to use...
	I0122 21:15:42.523799  300719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:15:42.543208  300719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:15:42.560695  300719 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:15:42.560765  300719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:15:42.579301  300719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:15:42.596895  300719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:15:42.724570  300719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:15:42.903012  300719 docker.go:233] disabling docker service ...
	I0122 21:15:42.903112  300719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:15:42.922091  300719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:15:42.940034  300719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:15:43.064290  300719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:15:43.199661  300719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:15:43.215787  300719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:15:43.239180  300719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:15:43.239247  300719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.251432  300719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:15:43.251513  300719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.263641  300719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.277547  300719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.291560  300719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:15:43.305381  300719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.320100  300719 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.343968  300719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:43.356726  300719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:15:43.369845  300719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:15:43.369923  300719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:15:43.387139  300719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:15:43.399391  300719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:15:43.523155  300719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:15:43.629944  300719 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:15:43.630023  300719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:15:43.636196  300719 start.go:563] Will wait 60s for crictl version
	I0122 21:15:43.636279  300719 ssh_runner.go:195] Run: which crictl
	I0122 21:15:43.640834  300719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:15:43.684272  300719 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:15:43.684406  300719 ssh_runner.go:195] Run: crio --version
	I0122 21:15:43.715395  300719 ssh_runner.go:195] Run: crio --version
	I0122 21:15:43.749250  300719 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
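Annotation: taken together, the sed invocations logged above shape the CRI-O drop-in. The block below is a reconstruction from those commands (an assumption, not a capture of the VM's actual file) of roughly what /etc/crio/crio.conf.d/02-crio.conf ends up containing, plus the restart that applies it:

	# Reconstruction (assumption), derived from the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	# The drop-in only takes effect after CRI-O is restarted, as the log does:
	sudo systemctl daemon-reload && sudo systemctl restart crio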
	I0122 21:15:40.953979  303786 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:15:40.954052  303786 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:15:40.954069  303786 cache.go:56] Caching tarball of preloaded images
	I0122 21:15:40.954299  303786 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:15:40.954317  303786 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:15:40.954476  303786 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/config.json ...
	I0122 21:15:40.954514  303786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/config.json: {Name:mkbbfa4f9ed07fa68ab28eed88564836f0ecfc57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:40.954754  303786 start.go:360] acquireMachinesLock for bridge-804887: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:15:43.750784  300719 main.go:141] libmachine: (enable-default-cni-804887) Calling .GetIP
	I0122 21:15:43.753728  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:43.754107  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:87:43", ip: ""} in network mk-enable-default-cni-804887: {Iface:virbr1 ExpiryTime:2025-01-22 22:15:29 +0000 UTC Type:0 Mac:52:54:00:54:87:43 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:enable-default-cni-804887 Clientid:01:52:54:00:54:87:43}
	I0122 21:15:43.754144  300719 main.go:141] libmachine: (enable-default-cni-804887) DBG | domain enable-default-cni-804887 has defined IP address 192.168.39.32 and MAC address 52:54:00:54:87:43 in network mk-enable-default-cni-804887
	I0122 21:15:43.754396  300719 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0122 21:15:43.759084  300719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:15:43.773862  300719 kubeadm.go:883] updating cluster {Name:enable-default-cni-804887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-804887 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:15:43.773978  300719 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:15:43.774021  300719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:15:43.814397  300719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:15:43.814494  300719 ssh_runner.go:195] Run: which lz4
	I0122 21:15:43.819235  300719 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:15:43.823961  300719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:15:43.824027  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:15:45.483269  300719 crio.go:462] duration metric: took 1.664084651s to copy over tarball
	I0122 21:15:45.483378  300719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:15:49.616278  302322 start.go:364] duration metric: took 24.492196938s to acquireMachinesLock for "flannel-804887"
	I0122 21:15:49.616369  302322 start.go:93] Provisioning new machine with config: &{Name:flannel-804887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-804887 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:15:49.616501  302322 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 21:15:48.026969  300719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.543555601s)
	I0122 21:15:48.027002  300719 crio.go:469] duration metric: took 2.54368935s to extract the tarball
	I0122 21:15:48.027010  300719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:15:48.067978  300719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:15:48.120645  300719 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:15:48.120672  300719 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:15:48.120682  300719 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.32.1 crio true true} ...
	I0122 21:15:48.120791  300719 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-804887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-804887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0122 21:15:48.120862  300719 ssh_runner.go:195] Run: crio config
	I0122 21:15:48.181423  300719 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:15:48.181461  300719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:15:48.181495  300719 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-804887 NodeName:enable-default-cni-804887 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:15:48.181675  300719 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-804887"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.32"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:15:48.181765  300719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:15:48.193688  300719 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:15:48.193768  300719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:15:48.205478  300719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0122 21:15:48.225301  300719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:15:48.244752  300719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
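Annotation: the 2302-byte file written above is the kubeadm config dumped earlier in the log. A hedged sketch of how one could sanity-check it by hand before the real init later in this sequence (not something the test does; paths are the ones from the log):

	# Hedged sketch -- not executed by the test:
	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --dry-run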
	I0122 21:15:48.264410  300719 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0122 21:15:48.269000  300719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:15:48.286037  300719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:15:48.428318  300719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:15:48.448171  300719 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887 for IP: 192.168.39.32
	I0122 21:15:48.448198  300719 certs.go:194] generating shared ca certs ...
	I0122 21:15:48.448218  300719 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:48.448408  300719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:15:48.448453  300719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:15:48.448465  300719 certs.go:256] generating profile certs ...
	I0122 21:15:48.448543  300719 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.key
	I0122 21:15:48.448563  300719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt with IP's: []
	I0122 21:15:48.584934  300719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt ...
	I0122 21:15:48.584973  300719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: {Name:mka87ee351cbe48a5275700a67f69a0f4c6bbece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:48.585168  300719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.key ...
	I0122 21:15:48.585188  300719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.key: {Name:mkafa7af0e566151d1eaddd9e729128155e463e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:48.585277  300719 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.key.97a0d18a
	I0122 21:15:48.585294  300719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.crt.97a0d18a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.32]
	I0122 21:15:48.855828  300719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.crt.97a0d18a ...
	I0122 21:15:48.855866  300719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.crt.97a0d18a: {Name:mkbdbe1d10fef8d350965b62db234eb892a2b36a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:48.856087  300719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.key.97a0d18a ...
	I0122 21:15:48.856110  300719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.key.97a0d18a: {Name:mk3517929a2f2fc58eb034de44856c47e67a7e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:48.856234  300719 certs.go:381] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.crt.97a0d18a -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.crt
	I0122 21:15:48.856370  300719 certs.go:385] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.key.97a0d18a -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.key
	I0122 21:15:48.856449  300719 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.key
	I0122 21:15:48.856475  300719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.crt with IP's: []
	I0122 21:15:49.006731  300719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.crt ...
	I0122 21:15:49.006764  300719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.crt: {Name:mkee35c1feb519a8a4db91a92f6208632b46087a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:49.006942  300719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.key ...
	I0122 21:15:49.006956  300719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.key: {Name:mk45b1d8229602084bd331496a8dbd7823a7995b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:49.007195  300719 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:15:49.007252  300719 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:15:49.007269  300719 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:15:49.007295  300719 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:15:49.007319  300719 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:15:49.007342  300719 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:15:49.007381  300719 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:15:49.008010  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:15:49.045458  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:15:49.088269  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:15:49.120359  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:15:49.165813  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0122 21:15:49.204481  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:15:49.238962  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:15:49.269389  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:15:49.298301  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:15:49.327141  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:15:49.362967  300719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:15:49.392528  300719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:15:49.414771  300719 ssh_runner.go:195] Run: openssl version
	I0122 21:15:49.422444  300719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:15:49.439254  300719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:15:49.445414  300719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:15:49.445504  300719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:15:49.454807  300719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:15:49.471961  300719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:15:49.487065  300719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:15:49.494004  300719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:15:49.494083  300719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:15:49.501418  300719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:15:49.514733  300719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:15:49.528575  300719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:15:49.534323  300719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:15:49.534411  300719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:15:49.542911  300719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
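Annotation: the hash-named symlinks created above follow the standard OpenSSL subject-hash convention: the link name is the certificate's subject hash plus a ".0" suffix. A short sketch, assumed to be run inside the VM, using the file names from the log:

	# Hedged sketch -- how the b5213941.0 link above is derived:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	#   -> prints the subject hash (b5213941 here), which OpenSSL uses to
	#      locate the CA through /etc/ssl/certs/<hash>.0
	ls -l /etc/ssl/certs/b5213941.0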
	I0122 21:15:49.557295  300719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:15:49.562734  300719 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 21:15:49.562798  300719 kubeadm.go:392] StartCluster: {Name:enable-default-cni-804887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-804887 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:15:49.562894  300719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:15:49.562960  300719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:15:49.603188  300719 cri.go:89] found id: ""
	I0122 21:15:49.603291  300719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:15:49.615617  300719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:15:49.630768  300719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:15:49.642961  300719 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:15:49.642990  300719 kubeadm.go:157] found existing configuration files:
	
	I0122 21:15:49.643047  300719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:15:49.655116  300719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:15:49.655196  300719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:15:49.667044  300719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:15:49.678669  300719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:15:49.678742  300719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:15:49.690716  300719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:15:49.702247  300719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:15:49.702319  300719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:15:49.715744  300719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:15:49.730717  300719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:15:49.730793  300719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:15:49.746728  300719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:15:49.815907  300719 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:15:49.815995  300719 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:15:50.001567  300719 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:15:50.001746  300719 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:15:50.001916  300719 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:15:50.025011  300719 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:15:49.710121  302322 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0122 21:15:49.710432  302322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:15:49.710504  302322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:15:49.728650  302322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0122 21:15:49.729236  302322 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:15:49.729967  302322 main.go:141] libmachine: Using API Version  1
	I0122 21:15:49.729999  302322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:15:49.730463  302322 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:15:49.730689  302322 main.go:141] libmachine: (flannel-804887) Calling .GetMachineName
	I0122 21:15:49.730891  302322 main.go:141] libmachine: (flannel-804887) Calling .DriverName
	I0122 21:15:49.731109  302322 start.go:159] libmachine.API.Create for "flannel-804887" (driver="kvm2")
	I0122 21:15:49.731145  302322 client.go:168] LocalClient.Create starting
	I0122 21:15:49.731181  302322 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem
	I0122 21:15:49.731229  302322 main.go:141] libmachine: Decoding PEM data...
	I0122 21:15:49.731250  302322 main.go:141] libmachine: Parsing certificate...
	I0122 21:15:49.731324  302322 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem
	I0122 21:15:49.731351  302322 main.go:141] libmachine: Decoding PEM data...
	I0122 21:15:49.731375  302322 main.go:141] libmachine: Parsing certificate...
	I0122 21:15:49.731401  302322 main.go:141] libmachine: Running pre-create checks...
	I0122 21:15:49.731413  302322 main.go:141] libmachine: (flannel-804887) Calling .PreCreateCheck
	I0122 21:15:49.731818  302322 main.go:141] libmachine: (flannel-804887) Calling .GetConfigRaw
	I0122 21:15:49.732306  302322 main.go:141] libmachine: Creating machine...
	I0122 21:15:49.732324  302322 main.go:141] libmachine: (flannel-804887) Calling .Create
	I0122 21:15:49.732451  302322 main.go:141] libmachine: (flannel-804887) creating KVM machine...
	I0122 21:15:49.732465  302322 main.go:141] libmachine: (flannel-804887) creating network...
	I0122 21:15:49.733920  302322 main.go:141] libmachine: (flannel-804887) DBG | found existing default KVM network
	I0122 21:15:49.735334  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:49.735097  303889 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1a:77:9d} reservation:<nil>}
	I0122 21:15:49.736274  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:49.736167  303889 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bf0}
	I0122 21:15:49.736302  302322 main.go:141] libmachine: (flannel-804887) DBG | created network xml: 
	I0122 21:15:49.736350  302322 main.go:141] libmachine: (flannel-804887) DBG | <network>
	I0122 21:15:49.736372  302322 main.go:141] libmachine: (flannel-804887) DBG |   <name>mk-flannel-804887</name>
	I0122 21:15:49.736382  302322 main.go:141] libmachine: (flannel-804887) DBG |   <dns enable='no'/>
	I0122 21:15:49.736392  302322 main.go:141] libmachine: (flannel-804887) DBG |   
	I0122 21:15:49.736402  302322 main.go:141] libmachine: (flannel-804887) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0122 21:15:49.736413  302322 main.go:141] libmachine: (flannel-804887) DBG |     <dhcp>
	I0122 21:15:49.736425  302322 main.go:141] libmachine: (flannel-804887) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0122 21:15:49.736443  302322 main.go:141] libmachine: (flannel-804887) DBG |     </dhcp>
	I0122 21:15:49.736454  302322 main.go:141] libmachine: (flannel-804887) DBG |   </ip>
	I0122 21:15:49.736466  302322 main.go:141] libmachine: (flannel-804887) DBG |   
	I0122 21:15:49.736475  302322 main.go:141] libmachine: (flannel-804887) DBG | </network>
	I0122 21:15:49.736497  302322 main.go:141] libmachine: (flannel-804887) DBG | 
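Annotation: the XML printed above is what libmachine hands to libvirt. For someone reproducing the same network by hand, the equivalent virsh steps would look roughly like this (a hedged sketch; the temp file name is an assumption, the XML is copied from the log):

	# Hedged sketch -- manual equivalent of the network creation above:
	cat > /tmp/mk-flannel-804887.xml <<-'EOF'
	<network>
	  <name>mk-flannel-804887</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	EOF
	virsh net-define /tmp/mk-flannel-804887.xml
	virsh net-start mk-flannel-804887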
	I0122 21:15:49.877468  302322 main.go:141] libmachine: (flannel-804887) DBG | trying to create private KVM network mk-flannel-804887 192.168.50.0/24...
	I0122 21:15:49.976232  302322 main.go:141] libmachine: (flannel-804887) DBG | private KVM network mk-flannel-804887 192.168.50.0/24 created
	I0122 21:15:49.976274  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:49.976186  303889 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:15:49.976293  302322 main.go:141] libmachine: (flannel-804887) setting up store path in /home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887 ...
	I0122 21:15:49.976304  302322 main.go:141] libmachine: (flannel-804887) building disk image from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 21:15:49.976325  302322 main.go:141] libmachine: (flannel-804887) Downloading /home/jenkins/minikube-integration/20288-247142/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 21:15:50.143031  300719 out.go:235]   - Generating certificates and keys ...
	I0122 21:15:50.143176  300719 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:15:50.143265  300719 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:15:50.344220  300719 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 21:15:50.452769  300719 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 21:15:50.500183  300719 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 21:15:50.746408  300719 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 21:15:51.003817  300719 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 21:15:51.004091  300719 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-804887 localhost] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0122 21:15:51.240568  300719 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 21:15:51.240798  300719 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-804887 localhost] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0122 21:15:51.451756  300719 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 21:15:51.949664  300719 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 21:15:52.179629  300719 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0122 21:15:52.179728  300719 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:15:52.291602  300719 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:15:52.400070  300719 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:15:52.687225  300719 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:15:52.773004  300719 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:15:52.918989  300719 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:15:52.919659  300719 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:15:52.922410  300719 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
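Annotation: the SANs kubeadm reports above for the apiserver and etcd serving certificates can be checked directly on disk. A hedged sketch (the etcd sub-path follows kubeadm's usual layout under the certificatesDir from the config; treat the exact paths as an assumption):

	# Hedged sketch -- inspect the SANs of the freshly generated certs:
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt   | grep -A1 'Subject Alternative Name'
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt | grep -A1 'Subject Alternative Name'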
	I0122 21:15:49.330950  301555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:15:49.330985  301555 machine.go:96] duration metric: took 7.092272543s to provisionDockerMachine
	I0122 21:15:49.331000  301555 start.go:293] postStartSetup for "kubernetes-upgrade-168719" (driver="kvm2")
	I0122 21:15:49.331011  301555 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:15:49.331033  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:49.331457  301555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:15:49.331495  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:49.334395  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.334819  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:49.334852  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.335082  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:49.335386  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:49.335602  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:49.335753  301555 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:15:49.434844  301555 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:15:49.440565  301555 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:15:49.440605  301555 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:15:49.440684  301555 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:15:49.440781  301555 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:15:49.440917  301555 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:15:49.452916  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:15:49.487566  301555 start.go:296] duration metric: took 156.549178ms for postStartSetup
	I0122 21:15:49.487621  301555 fix.go:56] duration metric: took 7.275604861s for fixHost
	I0122 21:15:49.487650  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:49.491001  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.491436  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:49.491474  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.491640  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:49.491897  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:49.492087  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:49.492269  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:49.492443  301555 main.go:141] libmachine: Using SSH client type: native
	I0122 21:15:49.492621  301555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.121 22 <nil> <nil>}
	I0122 21:15:49.492631  301555 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:15:49.616073  301555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580549.610024328
	
	I0122 21:15:49.616105  301555 fix.go:216] guest clock: 1737580549.610024328
	I0122 21:15:49.616116  301555 fix.go:229] Guest: 2025-01-22 21:15:49.610024328 +0000 UTC Remote: 2025-01-22 21:15:49.48762626 +0000 UTC m=+31.309812970 (delta=122.398068ms)
	I0122 21:15:49.616156  301555 fix.go:200] guest clock delta is within tolerance: 122.398068ms
	I0122 21:15:49.616170  301555 start.go:83] releasing machines lock for "kubernetes-upgrade-168719", held for 7.404192861s
	I0122 21:15:49.616207  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:49.616556  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:15:49.620311  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.620686  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:49.620723  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.620936  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:49.621574  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:49.621798  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .DriverName
	I0122 21:15:49.621949  301555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:15:49.622012  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:49.622047  301555 ssh_runner.go:195] Run: cat /version.json
	I0122 21:15:49.622078  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHHostname
	I0122 21:15:49.625059  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.625161  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.625477  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:49.625508  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.625609  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:49.625637  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:49.625885  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:49.625921  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHPort
	I0122 21:15:49.626115  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:49.626120  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHKeyPath
	I0122 21:15:49.626297  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:49.626422  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetSSHUsername
	I0122 21:15:49.626455  301555 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:15:49.626540  301555 sshutil.go:53] new ssh client: &{IP:192.168.72.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/kubernetes-upgrade-168719/id_rsa Username:docker}
	I0122 21:15:49.724567  301555 ssh_runner.go:195] Run: systemctl --version
	I0122 21:15:49.745636  301555 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:15:49.917270  301555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:15:49.937792  301555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:15:49.937908  301555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:15:49.951324  301555 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 21:15:49.951359  301555 start.go:495] detecting cgroup driver to use...
	I0122 21:15:49.951435  301555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:15:49.975837  301555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:15:49.999843  301555 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:15:49.999919  301555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:15:50.017133  301555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:15:50.038874  301555 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:15:50.200762  301555 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:15:50.366109  301555 docker.go:233] disabling docker service ...
	I0122 21:15:50.366260  301555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:15:50.392615  301555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:15:50.411500  301555 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:15:50.581737  301555 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:15:50.739137  301555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:15:50.758663  301555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:15:50.780723  301555 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:15:50.780808  301555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.798417  301555 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:15:50.798511  301555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.815659  301555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.833556  301555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.850520  301555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:15:50.867023  301555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.880616  301555 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.896728  301555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:15:50.912906  301555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:15:50.928371  301555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:15:50.943255  301555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:15:51.143757  301555 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:15:50.356466  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:50.356320  303889 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887/id_rsa...
	I0122 21:15:50.614931  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:50.614775  303889 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887/flannel-804887.rawdisk...
	I0122 21:15:50.614967  302322 main.go:141] libmachine: (flannel-804887) DBG | Writing magic tar header
	I0122 21:15:50.614986  302322 main.go:141] libmachine: (flannel-804887) DBG | Writing SSH key tar header
	I0122 21:15:50.615000  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:50.614950  303889 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887 ...
	I0122 21:15:50.615132  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887
	I0122 21:15:50.615156  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines
	I0122 21:15:50.615170  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:15:50.615183  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142
	I0122 21:15:50.615194  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 21:15:50.615202  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home/jenkins
	I0122 21:15:50.615211  302322 main.go:141] libmachine: (flannel-804887) DBG | checking permissions on dir: /home
	I0122 21:15:50.615230  302322 main.go:141] libmachine: (flannel-804887) DBG | skipping /home - not owner
	I0122 21:15:50.615248  302322 main.go:141] libmachine: (flannel-804887) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887 (perms=drwx------)
	I0122 21:15:50.615264  302322 main.go:141] libmachine: (flannel-804887) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines (perms=drwxr-xr-x)
	I0122 21:15:50.615282  302322 main.go:141] libmachine: (flannel-804887) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube (perms=drwxr-xr-x)
	I0122 21:15:50.615297  302322 main.go:141] libmachine: (flannel-804887) setting executable bit set on /home/jenkins/minikube-integration/20288-247142 (perms=drwxrwxr-x)
	I0122 21:15:50.615307  302322 main.go:141] libmachine: (flannel-804887) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 21:15:50.615321  302322 main.go:141] libmachine: (flannel-804887) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 21:15:50.615331  302322 main.go:141] libmachine: (flannel-804887) creating domain...
	I0122 21:15:50.616714  302322 main.go:141] libmachine: (flannel-804887) define libvirt domain using xml: 
	I0122 21:15:50.616742  302322 main.go:141] libmachine: (flannel-804887) <domain type='kvm'>
	I0122 21:15:50.616752  302322 main.go:141] libmachine: (flannel-804887)   <name>flannel-804887</name>
	I0122 21:15:50.616760  302322 main.go:141] libmachine: (flannel-804887)   <memory unit='MiB'>3072</memory>
	I0122 21:15:50.616768  302322 main.go:141] libmachine: (flannel-804887)   <vcpu>2</vcpu>
	I0122 21:15:50.616775  302322 main.go:141] libmachine: (flannel-804887)   <features>
	I0122 21:15:50.616783  302322 main.go:141] libmachine: (flannel-804887)     <acpi/>
	I0122 21:15:50.616801  302322 main.go:141] libmachine: (flannel-804887)     <apic/>
	I0122 21:15:50.616812  302322 main.go:141] libmachine: (flannel-804887)     <pae/>
	I0122 21:15:50.616819  302322 main.go:141] libmachine: (flannel-804887)     
	I0122 21:15:50.616866  302322 main.go:141] libmachine: (flannel-804887)   </features>
	I0122 21:15:50.616901  302322 main.go:141] libmachine: (flannel-804887)   <cpu mode='host-passthrough'>
	I0122 21:15:50.616914  302322 main.go:141] libmachine: (flannel-804887)   
	I0122 21:15:50.616921  302322 main.go:141] libmachine: (flannel-804887)   </cpu>
	I0122 21:15:50.616929  302322 main.go:141] libmachine: (flannel-804887)   <os>
	I0122 21:15:50.616938  302322 main.go:141] libmachine: (flannel-804887)     <type>hvm</type>
	I0122 21:15:50.616946  302322 main.go:141] libmachine: (flannel-804887)     <boot dev='cdrom'/>
	I0122 21:15:50.616956  302322 main.go:141] libmachine: (flannel-804887)     <boot dev='hd'/>
	I0122 21:15:50.616965  302322 main.go:141] libmachine: (flannel-804887)     <bootmenu enable='no'/>
	I0122 21:15:50.616971  302322 main.go:141] libmachine: (flannel-804887)   </os>
	I0122 21:15:50.616985  302322 main.go:141] libmachine: (flannel-804887)   <devices>
	I0122 21:15:50.616997  302322 main.go:141] libmachine: (flannel-804887)     <disk type='file' device='cdrom'>
	I0122 21:15:50.617028  302322 main.go:141] libmachine: (flannel-804887)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887/boot2docker.iso'/>
	I0122 21:15:50.617040  302322 main.go:141] libmachine: (flannel-804887)       <target dev='hdc' bus='scsi'/>
	I0122 21:15:50.617048  302322 main.go:141] libmachine: (flannel-804887)       <readonly/>
	I0122 21:15:50.617054  302322 main.go:141] libmachine: (flannel-804887)     </disk>
	I0122 21:15:50.617070  302322 main.go:141] libmachine: (flannel-804887)     <disk type='file' device='disk'>
	I0122 21:15:50.617083  302322 main.go:141] libmachine: (flannel-804887)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 21:15:50.617106  302322 main.go:141] libmachine: (flannel-804887)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/flannel-804887/flannel-804887.rawdisk'/>
	I0122 21:15:50.617117  302322 main.go:141] libmachine: (flannel-804887)       <target dev='hda' bus='virtio'/>
	I0122 21:15:50.617123  302322 main.go:141] libmachine: (flannel-804887)     </disk>
	I0122 21:15:50.617130  302322 main.go:141] libmachine: (flannel-804887)     <interface type='network'>
	I0122 21:15:50.617138  302322 main.go:141] libmachine: (flannel-804887)       <source network='mk-flannel-804887'/>
	I0122 21:15:50.617149  302322 main.go:141] libmachine: (flannel-804887)       <model type='virtio'/>
	I0122 21:15:50.617157  302322 main.go:141] libmachine: (flannel-804887)     </interface>
	I0122 21:15:50.617168  302322 main.go:141] libmachine: (flannel-804887)     <interface type='network'>
	I0122 21:15:50.617176  302322 main.go:141] libmachine: (flannel-804887)       <source network='default'/>
	I0122 21:15:50.617186  302322 main.go:141] libmachine: (flannel-804887)       <model type='virtio'/>
	I0122 21:15:50.617193  302322 main.go:141] libmachine: (flannel-804887)     </interface>
	I0122 21:15:50.617204  302322 main.go:141] libmachine: (flannel-804887)     <serial type='pty'>
	I0122 21:15:50.617212  302322 main.go:141] libmachine: (flannel-804887)       <target port='0'/>
	I0122 21:15:50.617218  302322 main.go:141] libmachine: (flannel-804887)     </serial>
	I0122 21:15:50.617269  302322 main.go:141] libmachine: (flannel-804887)     <console type='pty'>
	I0122 21:15:50.617286  302322 main.go:141] libmachine: (flannel-804887)       <target type='serial' port='0'/>
	I0122 21:15:50.617294  302322 main.go:141] libmachine: (flannel-804887)     </console>
	I0122 21:15:50.617301  302322 main.go:141] libmachine: (flannel-804887)     <rng model='virtio'>
	I0122 21:15:50.617314  302322 main.go:141] libmachine: (flannel-804887)       <backend model='random'>/dev/random</backend>
	I0122 21:15:50.617323  302322 main.go:141] libmachine: (flannel-804887)     </rng>
	I0122 21:15:50.617330  302322 main.go:141] libmachine: (flannel-804887)     
	I0122 21:15:50.617340  302322 main.go:141] libmachine: (flannel-804887)     
	I0122 21:15:50.617348  302322 main.go:141] libmachine: (flannel-804887)   </devices>
	I0122 21:15:50.617361  302322 main.go:141] libmachine: (flannel-804887) </domain>
	I0122 21:15:50.617378  302322 main.go:141] libmachine: (flannel-804887) 
	I0122 21:15:50.716470  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:6c:54:d5 in network default
	I0122 21:15:50.717198  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:50.717224  302322 main.go:141] libmachine: (flannel-804887) starting domain...
	I0122 21:15:50.717252  302322 main.go:141] libmachine: (flannel-804887) ensuring networks are active...
	I0122 21:15:50.718222  302322 main.go:141] libmachine: (flannel-804887) Ensuring network default is active
	I0122 21:15:50.718602  302322 main.go:141] libmachine: (flannel-804887) Ensuring network mk-flannel-804887 is active
	I0122 21:15:50.719235  302322 main.go:141] libmachine: (flannel-804887) getting domain XML...
	I0122 21:15:50.720030  302322 main.go:141] libmachine: (flannel-804887) creating domain...
	I0122 21:15:52.081287  302322 main.go:141] libmachine: (flannel-804887) waiting for IP...
	I0122 21:15:52.082150  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:52.082692  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:52.082761  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:52.082702  303889 retry.go:31] will retry after 277.889267ms: waiting for domain to come up
	I0122 21:15:52.362323  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:52.362903  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:52.362938  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:52.362849  303889 retry.go:31] will retry after 341.027668ms: waiting for domain to come up
	I0122 21:15:52.705416  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:52.705915  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:52.705942  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:52.705876  303889 retry.go:31] will retry after 460.236078ms: waiting for domain to come up
	I0122 21:15:53.167377  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:53.167915  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:53.167952  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:53.167878  303889 retry.go:31] will retry after 585.683872ms: waiting for domain to come up
	I0122 21:15:53.755809  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:53.756435  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:53.756467  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:53.756403  303889 retry.go:31] will retry after 586.933083ms: waiting for domain to come up
	I0122 21:15:54.345345  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:54.345977  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:54.346019  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:54.345934  303889 retry.go:31] will retry after 590.586817ms: waiting for domain to come up
	I0122 21:15:54.938352  302322 main.go:141] libmachine: (flannel-804887) DBG | domain flannel-804887 has defined MAC address 52:54:00:4c:43:c8 in network mk-flannel-804887
	I0122 21:15:54.938928  302322 main.go:141] libmachine: (flannel-804887) DBG | unable to find current IP address of domain flannel-804887 in network mk-flannel-804887
	I0122 21:15:54.938987  302322 main.go:141] libmachine: (flannel-804887) DBG | I0122 21:15:54.938898  303889 retry.go:31] will retry after 881.139363ms: waiting for domain to come up
	I0122 21:15:55.181483  301555 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.037677116s)
	I0122 21:15:55.181531  301555 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:15:55.181593  301555 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:15:55.188051  301555 start.go:563] Will wait 60s for crictl version
	I0122 21:15:55.188135  301555 ssh_runner.go:195] Run: which crictl
	I0122 21:15:55.192904  301555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:15:55.246528  301555 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:15:55.246640  301555 ssh_runner.go:195] Run: crio --version
	I0122 21:15:55.292345  301555 ssh_runner.go:195] Run: crio --version
	I0122 21:15:55.330322  301555 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:15:52.924587  300719 out.go:235]   - Booting up control plane ...
	I0122 21:15:52.924737  300719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:15:52.924843  300719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:15:52.924944  300719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:15:52.942823  300719 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:15:52.954327  300719 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:15:52.954407  300719 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:15:53.111420  300719 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:15:53.111621  300719 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:15:53.613429  300719 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.15698ms
	I0122 21:15:53.613563  300719 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:15:55.331640  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) Calling .GetIP
	I0122 21:15:55.335422  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:55.335866  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:2b:b3", ip: ""} in network mk-kubernetes-upgrade-168719: {Iface:virbr4 ExpiryTime:2025-01-22 22:14:44 +0000 UTC Type:0 Mac:52:54:00:c1:2b:b3 Iaid: IPaddr:192.168.72.121 Prefix:24 Hostname:kubernetes-upgrade-168719 Clientid:01:52:54:00:c1:2b:b3}
	I0122 21:15:55.335903  301555 main.go:141] libmachine: (kubernetes-upgrade-168719) DBG | domain kubernetes-upgrade-168719 has defined IP address 192.168.72.121 and MAC address 52:54:00:c1:2b:b3 in network mk-kubernetes-upgrade-168719
	I0122 21:15:55.336163  301555 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0122 21:15:55.341336  301555 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-168719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-168719 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.121 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:15:55.341495  301555 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:15:55.341547  301555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:15:55.401992  301555 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:15:55.402024  301555 crio.go:433] Images already preloaded, skipping extraction
	I0122 21:15:55.402094  301555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:15:55.452919  301555 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:15:55.452948  301555 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:15:55.452956  301555 kubeadm.go:934] updating node { 192.168.72.121 8443 v1.32.1 crio true true} ...
	I0122 21:15:55.453106  301555 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-168719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-168719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:15:55.453196  301555 ssh_runner.go:195] Run: crio config
	I0122 21:15:55.550843  301555 cni.go:84] Creating CNI manager for ""
	I0122 21:15:55.550882  301555 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:15:55.550896  301555 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:15:55.550935  301555 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.121 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-168719 NodeName:kubernetes-upgrade-168719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:15:55.551130  301555 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-168719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:15:55.551225  301555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:15:55.564809  301555 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:15:55.564911  301555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:15:55.580453  301555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0122 21:15:55.605136  301555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:15:55.628886  301555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0122 21:15:55.655823  301555 ssh_runner.go:195] Run: grep 192.168.72.121	control-plane.minikube.internal$ /etc/hosts
	I0122 21:15:55.661523  301555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:15:55.845093  301555 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:15:55.867254  301555 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719 for IP: 192.168.72.121
	I0122 21:15:55.867294  301555 certs.go:194] generating shared ca certs ...
	I0122 21:15:55.867321  301555 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:15:55.867533  301555 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:15:55.867610  301555 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:15:55.867625  301555 certs.go:256] generating profile certs ...
	I0122 21:15:55.867746  301555 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/client.key
	I0122 21:15:55.867811  301555 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key.db907cc9
	I0122 21:15:55.867864  301555 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.key
	I0122 21:15:55.868019  301555 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:15:55.868063  301555 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:15:55.868077  301555 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:15:55.868114  301555 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:15:55.868151  301555 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:15:55.868182  301555 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:15:55.868315  301555 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:15:55.869314  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:15:55.902046  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:15:55.934669  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:15:55.966742  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:15:55.998886  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0122 21:15:56.029978  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:15:56.061160  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:15:56.089429  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kubernetes-upgrade-168719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:15:56.120940  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:15:56.158420  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:15:56.190086  301555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:15:56.222966  301555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:15:56.246285  301555 ssh_runner.go:195] Run: openssl version
	I0122 21:15:56.253477  301555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:15:56.271425  301555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:15:56.281799  301555 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:15:56.281881  301555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:15:56.296960  301555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:15:56.325694  301555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:15:56.384333  301555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:15:56.395239  301555 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:15:56.395422  301555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:15:56.436461  301555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:15:56.507937  301555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:15:56.685013  301555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:15:56.764383  301555 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:15:56.764498  301555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:15:56.878839  301555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:15:57.055170  301555 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:15:57.087520  301555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:15:57.121479  301555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:15:57.152368  301555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:15:57.162992  301555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:15:57.198076  301555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:15:57.245859  301555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0122 21:15:57.287987  301555 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-168719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-168719 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.121 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:15:57.288116  301555 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:15:57.288192  301555 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:15:57.679159  301555 cri.go:89] found id: "5b741d177e2aae54b7c6536a92d33b39f1896b8e06ea2e1c3d5ffb939d994a3f"
	I0122 21:15:57.679198  301555 cri.go:89] found id: "cb8baa69037321a53f6913d1a7b6f1e2c583af125a2d7de218cb5c60b6271a72"
	I0122 21:15:57.679205  301555 cri.go:89] found id: "faa9be8ef99d493ac7c420461d97d76b1ede0db33e3f1e085dac24afbfedabf5"
	I0122 21:15:57.679228  301555 cri.go:89] found id: "3d3a8dbd6d6bb4cbd267a09a9df3cf08f948cb1aad8c88d2d0e62af08073ff2d"
	I0122 21:15:57.679233  301555 cri.go:89] found id: "dab403f101b08bb5267e74cfdf94d8cae3ce898e13ee2b184e553a5aada093e2"
	I0122 21:15:57.679238  301555 cri.go:89] found id: "8d24c33686f7b21895f532f8fc8cb89000935ac4c9b7140c615434885c4e1035"
	I0122 21:15:57.679242  301555 cri.go:89] found id: "c7a500ed61878dfd0135e75a45d14bfa2048bcf03d25753b551b7fa47c7b5091"
	I0122 21:15:57.679246  301555 cri.go:89] found id: "9fb21ea7e207cddb4efd78aab5408e6291b4e840d9508dde5e4645917e4f4a2f"
	I0122 21:15:57.679250  301555 cri.go:89] found id: "d16b40f8d9a6a0e2b6a565e4b49b7b171e27283eb995f3ddf33f5e77f8921b75"
	I0122 21:15:57.679260  301555 cri.go:89] found id: "90a946700caa93a4469d9799e42b562c4b1423823201445b34905f3740bb7f74"
	I0122 21:15:57.679265  301555 cri.go:89] found id: "ef823b9ade21778e32d11fa7a08a00b08d597a4d9ec83db229145476749c8f6c"
	I0122 21:15:57.679269  301555 cri.go:89] found id: "997810e2f74ab8826863221fba3261eef0aebd1d67622f4ec62f65302c5207fb"
	I0122 21:15:57.679273  301555 cri.go:89] found id: "987119f883826bbf621dbd3947402a1853f1cd3ef9e13d8982d25deb5394a7d4"
	I0122 21:15:57.679278  301555 cri.go:89] found id: ""
	I0122 21:15:57.679341  301555 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
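(Note on the wait step in the log above: after restarting CRI-O, minikube polls for the runtime socket and crictl version before continuing, per "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version". A minimal Go sketch of that kind of socket poll follows; the helper name, timeout handling, and sleep interval are illustrative assumptions, not minikube's actual implementation.)

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// waitForSocket polls a unix socket path until it accepts a connection or
// the deadline expires. Illustration only of the "Will wait 60s for socket
// path" step seen in the log; this is not minikube's own wait logic.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %v", path, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}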
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-168719 -n kubernetes-upgrade-168719
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-168719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-168719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-168719
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-168719: (1.046787937s)
--- FAIL: TestKubernetesUpgrade (434.02s)
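(Note on the retry pattern in the logs above: while the flannel-804887 and old-k8s-version VMs boot, libmachine repeatedly checks for a DHCP lease and logs lines like "will retry after 277.889267ms: waiting for domain to come up". A rough Go sketch of such a jittered backoff loop is shown below; the function name and backoff values are assumptions for illustration and do not come from minikube's retry.go.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// growing, jittered interval between tries, similar in spirit to the
// "will retry after ...: waiting for domain to come up" lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	tries := 0
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for domain to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}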

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (298.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m58.506743179s)

                                                
                                                
-- stdout --
	* [old-k8s-version-181389] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-181389" primary control-plane node in "old-k8s-version-181389" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 21:16:27.348448  304536 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:16:27.349100  304536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:16:27.349118  304536 out.go:358] Setting ErrFile to fd 2...
	I0122 21:16:27.349127  304536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:16:27.349605  304536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:16:27.350710  304536 out.go:352] Setting JSON to false
	I0122 21:16:27.352131  304536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14333,"bootTime":1737566254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:16:27.352262  304536 start.go:139] virtualization: kvm guest
	I0122 21:16:27.354323  304536 out.go:177] * [old-k8s-version-181389] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:16:27.355877  304536 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:16:27.355911  304536 notify.go:220] Checking for updates...
	I0122 21:16:27.357490  304536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:16:27.359039  304536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:16:27.360540  304536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:16:27.362008  304536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:16:27.363637  304536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:16:27.365583  304536 config.go:182] Loaded profile config "bridge-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:16:27.365763  304536 config.go:182] Loaded profile config "enable-default-cni-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:16:27.365871  304536 config.go:182] Loaded profile config "flannel-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:16:27.366019  304536 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:16:27.406950  304536 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:16:27.408574  304536 start.go:297] selected driver: kvm2
	I0122 21:16:27.408601  304536 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:16:27.408617  304536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:16:27.409851  304536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:16:27.410007  304536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:16:27.427677  304536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:16:27.427738  304536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 21:16:27.428019  304536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:16:27.428059  304536 cni.go:84] Creating CNI manager for ""
	I0122 21:16:27.428123  304536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:16:27.428136  304536 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 21:16:27.428214  304536 start.go:340] cluster config:
	{Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:16:27.428327  304536 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:16:27.430566  304536 out.go:177] * Starting "old-k8s-version-181389" primary control-plane node in "old-k8s-version-181389" cluster
	I0122 21:16:27.431914  304536 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 21:16:27.431981  304536 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0122 21:16:27.431997  304536 cache.go:56] Caching tarball of preloaded images
	I0122 21:16:27.432124  304536 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:16:27.432136  304536 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0122 21:16:27.432312  304536 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/config.json ...
	I0122 21:16:27.432338  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/config.json: {Name:mk0e7128cf874df05c61380a988d5eb6bada0ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:16:27.432534  304536 start.go:360] acquireMachinesLock for old-k8s-version-181389: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:16:47.792774  304536 start.go:364] duration metric: took 20.360196146s to acquireMachinesLock for "old-k8s-version-181389"
	I0122 21:16:47.792869  304536 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:16:47.793052  304536 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 21:16:47.794575  304536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 21:16:47.794850  304536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:16:47.794917  304536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:16:47.816638  304536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0122 21:16:47.817197  304536 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:16:47.817973  304536 main.go:141] libmachine: Using API Version  1
	I0122 21:16:47.818003  304536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:16:47.818489  304536 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:16:47.818706  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:16:47.818896  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:16:47.819087  304536 start.go:159] libmachine.API.Create for "old-k8s-version-181389" (driver="kvm2")
	I0122 21:16:47.819128  304536 client.go:168] LocalClient.Create starting
	I0122 21:16:47.819178  304536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem
	I0122 21:16:47.819237  304536 main.go:141] libmachine: Decoding PEM data...
	I0122 21:16:47.819266  304536 main.go:141] libmachine: Parsing certificate...
	I0122 21:16:47.819341  304536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem
	I0122 21:16:47.819368  304536 main.go:141] libmachine: Decoding PEM data...
	I0122 21:16:47.819386  304536 main.go:141] libmachine: Parsing certificate...
	I0122 21:16:47.819413  304536 main.go:141] libmachine: Running pre-create checks...
	I0122 21:16:47.819429  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .PreCreateCheck
	I0122 21:16:47.819841  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetConfigRaw
	I0122 21:16:47.820354  304536 main.go:141] libmachine: Creating machine...
	I0122 21:16:47.820371  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .Create
	I0122 21:16:47.820531  304536 main.go:141] libmachine: (old-k8s-version-181389) creating KVM machine...
	I0122 21:16:47.820555  304536 main.go:141] libmachine: (old-k8s-version-181389) creating network...
	I0122 21:16:47.822051  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found existing default KVM network
	I0122 21:16:47.823473  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:47.823255  304694 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1a:77:9d} reservation:<nil>}
	I0122 21:16:47.824774  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:47.824653  304694 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2b:58:68} reservation:<nil>}
	I0122 21:16:47.826219  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:47.826065  304694 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:cc:02} reservation:<nil>}
	I0122 21:16:47.827823  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:47.827699  304694 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030b1b0}
	I0122 21:16:47.827860  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | created network xml: 
	I0122 21:16:47.827873  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | <network>
	I0122 21:16:47.827881  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |   <name>mk-old-k8s-version-181389</name>
	I0122 21:16:47.827889  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |   <dns enable='no'/>
	I0122 21:16:47.827895  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |   
	I0122 21:16:47.827914  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0122 21:16:47.827921  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |     <dhcp>
	I0122 21:16:47.827930  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0122 21:16:47.827936  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |     </dhcp>
	I0122 21:16:47.827944  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |   </ip>
	I0122 21:16:47.827952  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG |   
	I0122 21:16:47.827959  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | </network>
	I0122 21:16:47.827966  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | 
	I0122 21:16:47.834732  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | trying to create private KVM network mk-old-k8s-version-181389 192.168.72.0/24...
	I0122 21:16:47.942740  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | private KVM network mk-old-k8s-version-181389 192.168.72.0/24 created
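A rough manual equivalent of the step just completed, assuming the generated XML above were saved to mk-old-k8s-version-181389.xml (minikube drives this through the libvirt API rather than shelling out to virsh):

	virsh net-define mk-old-k8s-version-181389.xml
	virsh net-start mk-old-k8s-version-181389
	virsh net-autostart mk-old-k8s-version-181389   # optional; keeps the network across host reboots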
	I0122 21:16:47.942844  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:47.942694  304694 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:16:47.942884  304536 main.go:141] libmachine: (old-k8s-version-181389) setting up store path in /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389 ...
	I0122 21:16:47.942913  304536 main.go:141] libmachine: (old-k8s-version-181389) building disk image from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 21:16:47.942931  304536 main.go:141] libmachine: (old-k8s-version-181389) Downloading /home/jenkins/minikube-integration/20288-247142/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 21:16:48.288974  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:48.288757  304694 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa...
	I0122 21:16:48.366105  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:48.365899  304694 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/old-k8s-version-181389.rawdisk...
	I0122 21:16:48.366147  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Writing magic tar header
	I0122 21:16:48.366167  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Writing SSH key tar header
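Per the lines above, the raw disk is seeded at its start with a small tar archive carrying the freshly generated SSH key, which the guest unpacks on first boot. A quick, illustrative look from the host:

	file /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/old-k8s-version-181389.rawdisk
	ls -l /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa*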
	I0122 21:16:48.366254  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:48.366069  304694 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389 ...
	I0122 21:16:48.366285  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389
	I0122 21:16:48.366308  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube/machines
	I0122 21:16:48.366328  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:16:48.366340  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-247142
	I0122 21:16:48.366366  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 21:16:48.366386  304536 main.go:141] libmachine: (old-k8s-version-181389) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389 (perms=drwx------)
	I0122 21:16:48.366398  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home/jenkins
	I0122 21:16:48.366412  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | checking permissions on dir: /home
	I0122 21:16:48.366424  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | skipping /home - not owner
	I0122 21:16:48.366449  304536 main.go:141] libmachine: (old-k8s-version-181389) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube/machines (perms=drwxr-xr-x)
	I0122 21:16:48.366464  304536 main.go:141] libmachine: (old-k8s-version-181389) setting executable bit set on /home/jenkins/minikube-integration/20288-247142/.minikube (perms=drwxr-xr-x)
	I0122 21:16:48.366474  304536 main.go:141] libmachine: (old-k8s-version-181389) setting executable bit set on /home/jenkins/minikube-integration/20288-247142 (perms=drwxrwxr-x)
	I0122 21:16:48.366488  304536 main.go:141] libmachine: (old-k8s-version-181389) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 21:16:48.366501  304536 main.go:141] libmachine: (old-k8s-version-181389) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 21:16:48.366514  304536 main.go:141] libmachine: (old-k8s-version-181389) creating domain...
	I0122 21:16:48.367931  304536 main.go:141] libmachine: (old-k8s-version-181389) define libvirt domain using xml: 
	I0122 21:16:48.367980  304536 main.go:141] libmachine: (old-k8s-version-181389) <domain type='kvm'>
	I0122 21:16:48.367992  304536 main.go:141] libmachine: (old-k8s-version-181389)   <name>old-k8s-version-181389</name>
	I0122 21:16:48.368000  304536 main.go:141] libmachine: (old-k8s-version-181389)   <memory unit='MiB'>2200</memory>
	I0122 21:16:48.368008  304536 main.go:141] libmachine: (old-k8s-version-181389)   <vcpu>2</vcpu>
	I0122 21:16:48.368019  304536 main.go:141] libmachine: (old-k8s-version-181389)   <features>
	I0122 21:16:48.368027  304536 main.go:141] libmachine: (old-k8s-version-181389)     <acpi/>
	I0122 21:16:48.368037  304536 main.go:141] libmachine: (old-k8s-version-181389)     <apic/>
	I0122 21:16:48.368045  304536 main.go:141] libmachine: (old-k8s-version-181389)     <pae/>
	I0122 21:16:48.368055  304536 main.go:141] libmachine: (old-k8s-version-181389)     
	I0122 21:16:48.368064  304536 main.go:141] libmachine: (old-k8s-version-181389)   </features>
	I0122 21:16:48.368074  304536 main.go:141] libmachine: (old-k8s-version-181389)   <cpu mode='host-passthrough'>
	I0122 21:16:48.368107  304536 main.go:141] libmachine: (old-k8s-version-181389)   
	I0122 21:16:48.368114  304536 main.go:141] libmachine: (old-k8s-version-181389)   </cpu>
	I0122 21:16:48.368121  304536 main.go:141] libmachine: (old-k8s-version-181389)   <os>
	I0122 21:16:48.368133  304536 main.go:141] libmachine: (old-k8s-version-181389)     <type>hvm</type>
	I0122 21:16:48.368140  304536 main.go:141] libmachine: (old-k8s-version-181389)     <boot dev='cdrom'/>
	I0122 21:16:48.368146  304536 main.go:141] libmachine: (old-k8s-version-181389)     <boot dev='hd'/>
	I0122 21:16:48.368155  304536 main.go:141] libmachine: (old-k8s-version-181389)     <bootmenu enable='no'/>
	I0122 21:16:48.368165  304536 main.go:141] libmachine: (old-k8s-version-181389)   </os>
	I0122 21:16:48.368173  304536 main.go:141] libmachine: (old-k8s-version-181389)   <devices>
	I0122 21:16:48.368178  304536 main.go:141] libmachine: (old-k8s-version-181389)     <disk type='file' device='cdrom'>
	I0122 21:16:48.368195  304536 main.go:141] libmachine: (old-k8s-version-181389)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/boot2docker.iso'/>
	I0122 21:16:48.368203  304536 main.go:141] libmachine: (old-k8s-version-181389)       <target dev='hdc' bus='scsi'/>
	I0122 21:16:48.368212  304536 main.go:141] libmachine: (old-k8s-version-181389)       <readonly/>
	I0122 21:16:48.368218  304536 main.go:141] libmachine: (old-k8s-version-181389)     </disk>
	I0122 21:16:48.368227  304536 main.go:141] libmachine: (old-k8s-version-181389)     <disk type='file' device='disk'>
	I0122 21:16:48.368236  304536 main.go:141] libmachine: (old-k8s-version-181389)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 21:16:48.368252  304536 main.go:141] libmachine: (old-k8s-version-181389)       <source file='/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/old-k8s-version-181389.rawdisk'/>
	I0122 21:16:48.368260  304536 main.go:141] libmachine: (old-k8s-version-181389)       <target dev='hda' bus='virtio'/>
	I0122 21:16:48.368268  304536 main.go:141] libmachine: (old-k8s-version-181389)     </disk>
	I0122 21:16:48.368276  304536 main.go:141] libmachine: (old-k8s-version-181389)     <interface type='network'>
	I0122 21:16:48.368286  304536 main.go:141] libmachine: (old-k8s-version-181389)       <source network='mk-old-k8s-version-181389'/>
	I0122 21:16:48.368297  304536 main.go:141] libmachine: (old-k8s-version-181389)       <model type='virtio'/>
	I0122 21:16:48.368312  304536 main.go:141] libmachine: (old-k8s-version-181389)     </interface>
	I0122 21:16:48.368323  304536 main.go:141] libmachine: (old-k8s-version-181389)     <interface type='network'>
	I0122 21:16:48.368333  304536 main.go:141] libmachine: (old-k8s-version-181389)       <source network='default'/>
	I0122 21:16:48.368343  304536 main.go:141] libmachine: (old-k8s-version-181389)       <model type='virtio'/>
	I0122 21:16:48.368352  304536 main.go:141] libmachine: (old-k8s-version-181389)     </interface>
	I0122 21:16:48.368362  304536 main.go:141] libmachine: (old-k8s-version-181389)     <serial type='pty'>
	I0122 21:16:48.368371  304536 main.go:141] libmachine: (old-k8s-version-181389)       <target port='0'/>
	I0122 21:16:48.368380  304536 main.go:141] libmachine: (old-k8s-version-181389)     </serial>
	I0122 21:16:48.368389  304536 main.go:141] libmachine: (old-k8s-version-181389)     <console type='pty'>
	I0122 21:16:48.368400  304536 main.go:141] libmachine: (old-k8s-version-181389)       <target type='serial' port='0'/>
	I0122 21:16:48.368408  304536 main.go:141] libmachine: (old-k8s-version-181389)     </console>
	I0122 21:16:48.368415  304536 main.go:141] libmachine: (old-k8s-version-181389)     <rng model='virtio'>
	I0122 21:16:48.368424  304536 main.go:141] libmachine: (old-k8s-version-181389)       <backend model='random'>/dev/random</backend>
	I0122 21:16:48.368429  304536 main.go:141] libmachine: (old-k8s-version-181389)     </rng>
	I0122 21:16:48.368436  304536 main.go:141] libmachine: (old-k8s-version-181389)     
	I0122 21:16:48.368443  304536 main.go:141] libmachine: (old-k8s-version-181389)     
	I0122 21:16:48.368451  304536 main.go:141] libmachine: (old-k8s-version-181389)   </devices>
	I0122 21:16:48.368458  304536 main.go:141] libmachine: (old-k8s-version-181389) </domain>
	I0122 21:16:48.368469  304536 main.go:141] libmachine: (old-k8s-version-181389) 
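The domain defined above boots from the boot2docker ISO on a SCSI cdrom, attaches the rawdisk as a virtio disk, and carries two virtio NICs: one on the dedicated mk-old-k8s-version-181389 network and one on libvirt's default network, which is why two MAC addresses are logged next. Once defined, the layout can be checked with virsh (illustrative):

	virsh domiflist old-k8s-version-181389     # both NICs with their MAC addresses and source networks
	virsh dumpxml old-k8s-version-181389 | grep -E "source file=|source network="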
	I0122 21:16:48.373155  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:6c:df:82 in network default
	I0122 21:16:48.374011  304536 main.go:141] libmachine: (old-k8s-version-181389) starting domain...
	I0122 21:16:48.374049  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:48.374059  304536 main.go:141] libmachine: (old-k8s-version-181389) ensuring networks are active...
	I0122 21:16:48.375086  304536 main.go:141] libmachine: (old-k8s-version-181389) Ensuring network default is active
	I0122 21:16:48.375541  304536 main.go:141] libmachine: (old-k8s-version-181389) Ensuring network mk-old-k8s-version-181389 is active
	I0122 21:16:48.376293  304536 main.go:141] libmachine: (old-k8s-version-181389) getting domain XML...
	I0122 21:16:48.377207  304536 main.go:141] libmachine: (old-k8s-version-181389) creating domain...
	I0122 21:16:49.898948  304536 main.go:141] libmachine: (old-k8s-version-181389) waiting for IP...
	I0122 21:16:49.900103  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:49.900853  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:49.900949  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:49.900847  304694 retry.go:31] will retry after 197.157374ms: waiting for domain to come up
	I0122 21:16:50.099601  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:50.100392  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:50.100426  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:50.100349  304694 retry.go:31] will retry after 250.812027ms: waiting for domain to come up
	I0122 21:16:50.353319  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:50.353618  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:50.353641  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:50.353615  304694 retry.go:31] will retry after 430.728513ms: waiting for domain to come up
	I0122 21:16:50.786855  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:50.786896  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:50.787026  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:50.786917  304694 retry.go:31] will retry after 573.488889ms: waiting for domain to come up
	I0122 21:16:51.361895  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:51.362695  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:51.362733  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:51.362665  304694 retry.go:31] will retry after 554.527628ms: waiting for domain to come up
	I0122 21:16:51.918879  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:51.919639  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:51.919689  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:51.919585  304694 retry.go:31] will retry after 860.372762ms: waiting for domain to come up
	I0122 21:16:52.781620  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:52.782142  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:52.782172  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:52.782099  304694 retry.go:31] will retry after 874.575311ms: waiting for domain to come up
	I0122 21:16:53.659093  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:53.659755  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:53.659810  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:53.659643  304694 retry.go:31] will retry after 1.051382018s: waiting for domain to come up
	I0122 21:16:54.712604  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:54.713329  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:54.713405  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:54.713310  304694 retry.go:31] will retry after 1.255028354s: waiting for domain to come up
	I0122 21:16:55.970045  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:55.970595  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:55.970626  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:55.970551  304694 retry.go:31] will retry after 2.042126107s: waiting for domain to come up
	I0122 21:16:58.014810  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:16:58.015346  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:16:58.015366  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:16:58.015312  304694 retry.go:31] will retry after 2.056121344s: waiting for domain to come up
	I0122 21:17:00.074669  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:00.075347  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:17:00.075441  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:17:00.075310  304694 retry.go:31] will retry after 3.107597415s: waiting for domain to come up
	I0122 21:17:03.185665  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:03.186496  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:17:03.186523  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:17:03.186383  304694 retry.go:31] will retry after 3.347091187s: waiting for domain to come up
	I0122 21:17:06.535541  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:06.536212  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:17:06.536243  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:17:06.536172  304694 retry.go:31] will retry after 5.208628484s: waiting for domain to come up
	I0122 21:17:11.747690  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:11.748203  304536 main.go:141] libmachine: (old-k8s-version-181389) found domain IP: 192.168.72.222
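The wait loop above polls for a DHCP lease on the mk- network with increasing backoff until dnsmasq hands the VM an address (about 22 seconds here). The same lease can be read directly on the host (illustrative):

	virsh net-dhcp-leases mk-old-k8s-version-181389    # should list 192.168.72.222 for 52:54:00:b5:43:94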
	I0122 21:17:11.748227  304536 main.go:141] libmachine: (old-k8s-version-181389) reserving static IP address...
	I0122 21:17:11.748241  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has current primary IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:11.748738  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-181389", mac: "52:54:00:b5:43:94", ip: "192.168.72.222"} in network mk-old-k8s-version-181389
	I0122 21:17:11.876127  304536 main.go:141] libmachine: (old-k8s-version-181389) reserved static IP address 192.168.72.222 for domain old-k8s-version-181389
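Reserving the static IP pins a DHCP host entry for the VM's MAC so it keeps 192.168.72.222 across reboots; a rough virsh equivalent of that update, using the values from the lines above:

	virsh net-update mk-old-k8s-version-181389 add ip-dhcp-host \
	  "<host mac='52:54:00:b5:43:94' name='old-k8s-version-181389' ip='192.168.72.222'/>" \
	  --live --config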
	I0122 21:17:11.876158  304536 main.go:141] libmachine: (old-k8s-version-181389) waiting for SSH...
	I0122 21:17:11.876180  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Getting to WaitForSSH function...
	I0122 21:17:11.884279  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:11.887924  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389
	I0122 21:17:11.887958  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find defined IP address of network mk-old-k8s-version-181389 interface with MAC address 52:54:00:b5:43:94
	I0122 21:17:11.888534  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Using SSH client type: external
	I0122 21:17:11.888560  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa (-rw-------)
	I0122 21:17:11.888590  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:17:11.888604  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | About to run SSH command:
	I0122 21:17:11.888630  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | exit 0
	I0122 21:17:11.894277  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | SSH cmd err, output: exit status 255: 
	I0122 21:17:11.894300  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0122 21:17:11.894308  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | command : exit 0
	I0122 21:17:11.894313  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | err     : exit status 255
	I0122 21:17:11.894321  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | output  : 
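Exit status 255 here is the ssh client's own error code: the lease lookup just above still came up empty (note the bare docker@ in the command line), so the probe cannot reach the guest yet and the driver simply retries. The probe itself is only a non-interactive 'exit 0', roughly:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa \
	    docker@192.168.72.222 'exit 0'; echo $?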
	I0122 21:17:14.894975  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Getting to WaitForSSH function...
	I0122 21:17:14.898335  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:14.898846  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:14.898871  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:14.899171  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Using SSH client type: external
	I0122 21:17:14.899201  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa (-rw-------)
	I0122 21:17:14.899240  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:17:14.899257  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | About to run SSH command:
	I0122 21:17:14.899270  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | exit 0
	I0122 21:17:15.039833  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | SSH cmd err, output: <nil>: 
	I0122 21:17:15.040031  304536 main.go:141] libmachine: (old-k8s-version-181389) KVM machine creation complete
	I0122 21:17:15.040419  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetConfigRaw
	I0122 21:17:15.041285  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:15.041569  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:15.041752  304536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 21:17:15.041769  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetState
	I0122 21:17:15.043618  304536 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 21:17:15.043639  304536 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 21:17:15.043645  304536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 21:17:15.043652  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:15.046829  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.048738  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.048766  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.049098  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:15.049366  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.049569  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.049751  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:15.049990  304536 main.go:141] libmachine: Using SSH client type: native
	I0122 21:17:15.050301  304536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:17:15.050321  304536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 21:17:15.199419  304536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:17:15.199448  304536 main.go:141] libmachine: Detecting the provisioner...
	I0122 21:17:15.199461  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:15.203213  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.203646  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.203705  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.203911  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:15.204170  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.204388  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.204548  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:15.204726  304536 main.go:141] libmachine: Using SSH client type: native
	I0122 21:17:15.205047  304536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:17:15.205120  304536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 21:17:15.332322  304536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 21:17:15.332414  304536 main.go:141] libmachine: found compatible host: buildroot
	I0122 21:17:15.332426  304536 main.go:141] libmachine: Provisioning with buildroot...
	I0122 21:17:15.332436  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:17:15.332733  304536 buildroot.go:166] provisioning hostname "old-k8s-version-181389"
	I0122 21:17:15.332765  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:17:15.332967  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:15.336523  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.336976  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.337015  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.337149  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:15.337398  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.337589  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.337794  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:15.337992  304536 main.go:141] libmachine: Using SSH client type: native
	I0122 21:17:15.338224  304536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:17:15.338243  304536 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-181389 && echo "old-k8s-version-181389" | sudo tee /etc/hostname
	I0122 21:17:15.484507  304536 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-181389
	
	I0122 21:17:15.484555  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:15.488091  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.488623  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.488673  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.489020  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:15.489246  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.489448  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.489603  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:15.489768  304536 main.go:141] libmachine: Using SSH client type: native
	I0122 21:17:15.489971  304536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:17:15.489988  304536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-181389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-181389/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-181389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:17:15.626917  304536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
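The hostname commands above set the transient hostname, persist it to /etc/hostname, and the shell fragment just run makes sure 127.0.1.1 resolves to it, either rewriting an existing 127.0.1.1 line or appending one. A quick check inside the guest (illustrative):

	hostname                          # old-k8s-version-181389
	grep '^127\.0\.1\.1' /etc/hosts   # 127.0.1.1 old-k8s-version-181389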
	I0122 21:17:15.626961  304536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:17:15.627003  304536 buildroot.go:174] setting up certificates
	I0122 21:17:15.627020  304536 provision.go:84] configureAuth start
	I0122 21:17:15.627037  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:17:15.627394  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:17:15.631234  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.631625  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.631663  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.631800  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:15.635158  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.635524  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.635561  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.635723  304536 provision.go:143] copyHostCerts
	I0122 21:17:15.635779  304536 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:17:15.635821  304536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:17:15.635878  304536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:17:15.635983  304536 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:17:15.635991  304536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:17:15.636012  304536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:17:15.636116  304536 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:17:15.636130  304536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:17:15.636163  304536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:17:15.636257  304536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-181389 san=[127.0.0.1 192.168.72.222 localhost minikube old-k8s-version-181389]
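The server certificate is generated with the SANs listed above (loopback, the VM address, localhost, minikube and the machine name) and signed by the local minikube CA; it can be inspected on the host, assuming openssl is installed there:

	openssl x509 -in /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'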
	I0122 21:17:15.795140  304536 provision.go:177] copyRemoteCerts
	I0122 21:17:15.795245  304536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:17:15.795285  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:15.799036  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.799570  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:15.799608  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:15.799862  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:15.800112  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:15.800276  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:15.800452  304536 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:17:15.899129  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 21:17:15.937490  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:17:15.974966  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0122 21:17:16.008149  304536 provision.go:87] duration metric: took 381.112599ms to configureAuth
	I0122 21:17:16.008184  304536 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:17:16.008382  304536 config.go:182] Loaded profile config "old-k8s-version-181389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0122 21:17:16.008490  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:16.011852  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.012280  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.012330  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.012490  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:16.012738  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.012942  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.013104  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:16.013290  304536 main.go:141] libmachine: Using SSH client type: native
	I0122 21:17:16.013536  304536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:17:16.013555  304536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:17:16.290392  304536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
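The drop-in written above passes --insecure-registry for 10.96.0.0/12 (the default Kubernetes service CIDR) to CRI-O via /etc/sysconfig/crio.minikube, then restarts the daemon. Inside the guest (illustrative):

	cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # active, once the restart in the same command finishes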
	
	I0122 21:17:16.290430  304536 main.go:141] libmachine: Checking connection to Docker...
	I0122 21:17:16.290442  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetURL
	I0122 21:17:16.292048  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | using libvirt version 6000000
	I0122 21:17:16.295193  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.295575  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.295607  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.295808  304536 main.go:141] libmachine: Docker is up and running!
	I0122 21:17:16.295827  304536 main.go:141] libmachine: Reticulating splines...
	I0122 21:17:16.295836  304536 client.go:171] duration metric: took 28.47669619s to LocalClient.Create
	I0122 21:17:16.295874  304536 start.go:167] duration metric: took 28.476788025s to libmachine.API.Create "old-k8s-version-181389"
	I0122 21:17:16.295889  304536 start.go:293] postStartSetup for "old-k8s-version-181389" (driver="kvm2")
	I0122 21:17:16.295903  304536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:17:16.295940  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:16.296204  304536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:17:16.296238  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:16.298918  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.299243  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.299323  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.299522  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:16.299762  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.299970  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:16.300129  304536 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:17:16.395130  304536 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:17:16.401957  304536 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:17:16.401999  304536 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:17:16.402076  304536 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:17:16.402167  304536 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:17:16.402313  304536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:17:16.413754  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:17:16.448163  304536 start.go:296] duration metric: took 152.253145ms for postStartSetup
	I0122 21:17:16.448226  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetConfigRaw
	I0122 21:17:16.448954  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:17:16.452289  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.452766  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.452815  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.453336  304536 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/config.json ...
	I0122 21:17:16.453657  304536 start.go:128] duration metric: took 28.660581462s to createHost
	I0122 21:17:16.453701  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:16.457311  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.457881  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.457907  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.457971  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:16.458325  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.458863  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.459156  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:16.459393  304536 main.go:141] libmachine: Using SSH client type: native
	I0122 21:17:16.459635  304536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:17:16.459646  304536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:17:16.588922  304536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580636.572176845
	
	I0122 21:17:16.588953  304536 fix.go:216] guest clock: 1737580636.572176845
	I0122 21:17:16.588965  304536 fix.go:229] Guest: 2025-01-22 21:17:16.572176845 +0000 UTC Remote: 2025-01-22 21:17:16.45367454 +0000 UTC m=+49.153437625 (delta=118.502305ms)
	I0122 21:17:16.589028  304536 fix.go:200] guest clock delta is within tolerance: 118.502305ms
	I0122 21:17:16.589037  304536 start.go:83] releasing machines lock for "old-k8s-version-181389", held for 28.796231779s
	I0122 21:17:16.589060  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:16.589422  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:17:16.592303  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.592761  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.592799  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.593038  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:16.593637  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:16.593888  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:17:16.593987  304536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:17:16.594035  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:16.594092  304536 ssh_runner.go:195] Run: cat /version.json
	I0122 21:17:16.594114  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:17:16.597070  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.597410  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.597435  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.597457  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.597650  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:16.597872  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.597937  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:16.597963  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:16.598103  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:17:16.598276  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:16.598308  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:17:16.598388  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:17:16.598427  304536 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:17:16.598735  304536 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:17:16.706523  304536 ssh_runner.go:195] Run: systemctl --version
	I0122 21:17:16.713879  304536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:17:16.882513  304536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:17:16.890470  304536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:17:16.890573  304536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:17:16.911362  304536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
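To keep CRI-O from loading stray bridge or podman CNI definitions, anything matching those names under /etc/cni/net.d is renamed with a .mk_disabled suffix; here that was only 87-podman-bridge.conflist. Inside the guest (illustrative):

	ls /etc/cni/net.d    # 87-podman-bridge.conflist.mk_disabled (no loopback config was present, per the warning above)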
	I0122 21:17:16.911395  304536 start.go:495] detecting cgroup driver to use...
	I0122 21:17:16.911460  304536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:17:16.932446  304536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:17:16.949465  304536 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:17:16.949549  304536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:17:16.965646  304536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:17:16.982025  304536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:17:17.129877  304536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:17:17.311898  304536 docker.go:233] disabling docker service ...
	I0122 21:17:17.311983  304536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:17:17.330302  304536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:17:17.347620  304536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:17:17.491794  304536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:17:17.655194  304536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:17:17.673581  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:17:17.700256  304536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0122 21:17:17.700348  304536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:17:17.714252  304536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:17:17.714327  304536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:17:17.727852  304536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:17:17.741395  304536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:17:17.754766  304536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:17:17.777800  304536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:17:17.790905  304536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:17:17.790985  304536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:17:17.809281  304536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:17:17.822402  304536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:17:17.966612  304536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:17:18.077210  304536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:17:18.077294  304536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:17:18.083933  304536 start.go:563] Will wait 60s for crictl version
	I0122 21:17:18.084016  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:18.091597  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:17:18.144214  304536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:17:18.144311  304536 ssh_runner.go:195] Run: crio --version
	I0122 21:17:18.187292  304536 ssh_runner.go:195] Run: crio --version
	I0122 21:17:18.248003  304536 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0122 21:17:18.249430  304536 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:17:18.253325  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:18.253797  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:17:06 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:17:18.253824  304536 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:17:18.254151  304536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0122 21:17:18.263503  304536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:17:18.280366  304536 kubeadm.go:883] updating cluster {Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:17:18.280539  304536 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 21:17:18.280608  304536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:17:18.326217  304536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0122 21:17:18.326311  304536 ssh_runner.go:195] Run: which lz4
	I0122 21:17:18.331912  304536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:17:18.337539  304536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:17:18.337581  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0122 21:17:20.399805  304536 crio.go:462] duration metric: took 2.067940708s to copy over tarball
	I0122 21:17:20.399933  304536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:17:23.764618  304536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.364643853s)
	I0122 21:17:23.764654  304536 crio.go:469] duration metric: took 3.364815053s to extract the tarball
	I0122 21:17:23.764665  304536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:17:23.812145  304536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:17:23.870003  304536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0122 21:17:23.870035  304536 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0122 21:17:23.870162  304536 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:23.870228  304536 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:23.870241  304536 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0122 21:17:23.870252  304536 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:23.870141  304536 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:17:23.870288  304536 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:23.870420  304536 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0122 21:17:23.870145  304536 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:23.873080  304536 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0122 21:17:23.872873  304536 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:23.873492  304536 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:23.873574  304536 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:23.873569  304536 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:17:23.874058  304536 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0122 21:17:23.874418  304536 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:23.874567  304536 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:24.025404  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:24.034373  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:24.037543  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0122 21:17:24.043531  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:24.047110  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0122 21:17:24.054677  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:24.070571  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:24.128681  304536 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0122 21:17:24.128768  304536 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:24.128823  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.272853  304536 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0122 21:17:24.272938  304536 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:24.273003  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.301301  304536 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0122 21:17:24.301357  304536 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0122 21:17:24.301398  304536 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0122 21:17:24.301413  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.301441  304536 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:24.301493  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.319314  304536 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0122 21:17:24.319383  304536 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:24.319394  304536 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0122 21:17:24.319439  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.319460  304536 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0122 21:17:24.319504  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.320312  304536 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0122 21:17:24.320356  304536 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:24.320406  304536 ssh_runner.go:195] Run: which crictl
	I0122 21:17:24.320415  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:24.320426  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:24.320482  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:24.320488  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:17:24.326734  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:24.329181  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:17:24.515143  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:24.515395  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:24.517156  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:24.517328  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:17:24.517377  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:24.536018  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:24.536138  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:17:24.734423  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:24.734509  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:17:24.747554  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:17:24.752447  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:17:24.752684  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:17:24.761897  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:17:24.796563  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:17:24.819399  304536 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:17:24.941138  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0122 21:17:24.941392  304536 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:17:24.941529  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0122 21:17:24.978731  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0122 21:17:24.997560  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0122 21:17:24.997707  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0122 21:17:25.033706  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0122 21:17:25.156145  304536 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0122 21:17:25.156221  304536 cache_images.go:92] duration metric: took 1.286170258s to LoadCachedImages
	W0122 21:17:25.156342  304536 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0122 21:17:25.156368  304536 kubeadm.go:934] updating node { 192.168.72.222 8443 v1.20.0 crio true true} ...
	I0122 21:17:25.156525  304536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-181389 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:17:25.156624  304536 ssh_runner.go:195] Run: crio config
	I0122 21:17:25.217648  304536 cni.go:84] Creating CNI manager for ""
	I0122 21:17:25.217677  304536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:17:25.217693  304536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:17:25.217721  304536 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-181389 NodeName:old-k8s-version-181389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0122 21:17:25.217889  304536 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-181389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:17:25.217974  304536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0122 21:17:25.231722  304536 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:17:25.231811  304536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:17:25.244863  304536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0122 21:17:25.267389  304536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:17:25.289407  304536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0122 21:17:25.311922  304536 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0122 21:17:25.316889  304536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:17:25.335522  304536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:17:25.509646  304536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:17:25.541081  304536 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389 for IP: 192.168.72.222
	I0122 21:17:25.541111  304536 certs.go:194] generating shared ca certs ...
	I0122 21:17:25.541135  304536 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:25.541330  304536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:17:25.541386  304536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:17:25.541401  304536 certs.go:256] generating profile certs ...
	I0122 21:17:25.541482  304536 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.key
	I0122 21:17:25.541522  304536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.crt with IP's: []
	I0122 21:17:26.075736  304536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.crt ...
	I0122 21:17:26.075772  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.crt: {Name:mkc9f35fcc4c07ddb8bd87c24744d6fb2ef7839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:26.076014  304536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.key ...
	I0122 21:17:26.076037  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.key: {Name:mk61377e20df402fd6d74ed6d57089d2844421cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:26.076184  304536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key.d562c0b4
	I0122 21:17:26.076206  304536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt.d562c0b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.222]
	I0122 21:17:26.160507  304536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt.d562c0b4 ...
	I0122 21:17:26.160544  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt.d562c0b4: {Name:mkb09216c78b824220e990418793ad45cced5931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:26.160722  304536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key.d562c0b4 ...
	I0122 21:17:26.160744  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key.d562c0b4: {Name:mk20010e321604122a36c351cb2e8d911cb58953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:26.160817  304536 certs.go:381] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt.d562c0b4 -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt
	I0122 21:17:26.160890  304536 certs.go:385] copying /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key.d562c0b4 -> /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key
	I0122 21:17:26.160941  304536 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.key
	I0122 21:17:26.160956  304536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.crt with IP's: []
	I0122 21:17:26.232357  304536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.crt ...
	I0122 21:17:26.232403  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.crt: {Name:mk40443e5ba6e7f61b84a88dde558ee528b489c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:26.232625  304536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.key ...
	I0122 21:17:26.232642  304536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.key: {Name:mkc13e938477728d8922b6ddecdaf90fefa327f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:17:26.232854  304536 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:17:26.232910  304536 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:17:26.232921  304536 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:17:26.232945  304536 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:17:26.232974  304536 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:17:26.232996  304536 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:17:26.233037  304536 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:17:26.234623  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:17:26.278315  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:17:26.315894  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:17:26.354888  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:17:26.392974  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0122 21:17:26.424467  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 21:17:26.458873  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:17:26.495466  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0122 21:17:26.534607  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:17:26.569517  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:17:26.616150  304536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:17:26.680183  304536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:17:26.709607  304536 ssh_runner.go:195] Run: openssl version
	I0122 21:17:26.719453  304536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:17:26.735805  304536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:17:26.742635  304536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:17:26.742729  304536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:17:26.751511  304536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:17:26.767892  304536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:17:26.780990  304536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:17:26.786508  304536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:17:26.786599  304536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:17:26.793407  304536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:17:26.808659  304536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:17:26.825199  304536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:17:26.831491  304536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:17:26.831566  304536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:17:26.840679  304536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:17:26.854465  304536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:17:26.859767  304536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 21:17:26.859842  304536 kubeadm.go:392] StartCluster: {Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:17:26.859971  304536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:17:26.860055  304536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:17:26.911794  304536 cri.go:89] found id: ""
	I0122 21:17:26.911890  304536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:17:26.926404  304536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:17:26.941289  304536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:17:26.954235  304536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:17:26.954264  304536 kubeadm.go:157] found existing configuration files:
	
	I0122 21:17:26.954390  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:17:26.966489  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:17:26.966576  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:17:26.978870  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:17:26.990805  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:17:26.990883  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:17:27.003863  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:17:27.016484  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:17:27.016566  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:17:27.030647  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:17:27.041592  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:17:27.041670  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:17:27.055297  304536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:17:27.239648  304536 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:17:27.240214  304536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:17:27.437284  304536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:17:27.437463  304536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:17:27.437619  304536 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:17:27.715394  304536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:17:27.716905  304536 out.go:235]   - Generating certificates and keys ...
	I0122 21:17:27.717043  304536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:17:27.717132  304536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:17:28.116488  304536 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 21:17:28.444981  304536 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 21:17:28.578224  304536 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 21:17:28.730071  304536 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 21:17:28.940509  304536 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 21:17:28.940837  304536 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-181389] and IPs [192.168.72.222 127.0.0.1 ::1]
	I0122 21:17:29.059570  304536 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 21:17:29.059755  304536 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-181389] and IPs [192.168.72.222 127.0.0.1 ::1]
	I0122 21:17:29.356408  304536 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 21:17:29.484675  304536 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 21:17:29.562860  304536 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0122 21:17:29.563213  304536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:17:29.694824  304536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:17:29.861858  304536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:17:30.135602  304536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:17:30.464432  304536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:17:30.494094  304536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:17:30.494269  304536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:17:30.494326  304536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:17:30.744236  304536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:17:30.746154  304536 out.go:235]   - Booting up control plane ...
	I0122 21:17:30.746316  304536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:17:30.755068  304536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:17:30.761008  304536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:17:30.763141  304536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:17:30.781658  304536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:18:10.780095  304536 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:18:10.781352  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:18:10.781652  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:18:15.781901  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:18:15.782169  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:18:25.782964  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:18:25.783235  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:18:45.784276  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:18:45.784537  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:19:25.784504  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:19:25.784744  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:19:25.784765  304536 kubeadm.go:310] 
	I0122 21:19:25.784803  304536 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:19:25.784862  304536 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:19:25.784910  304536 kubeadm.go:310] 
	I0122 21:19:25.784962  304536 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:19:25.785001  304536 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:19:25.785117  304536 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:19:25.785128  304536 kubeadm.go:310] 
	I0122 21:19:25.785219  304536 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:19:25.785254  304536 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:19:25.785282  304536 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:19:25.785289  304536 kubeadm.go:310] 
	I0122 21:19:25.785390  304536 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:19:25.785477  304536 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:19:25.785484  304536 kubeadm.go:310] 
	I0122 21:19:25.785576  304536 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:19:25.785660  304536 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:19:25.785777  304536 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:19:25.785866  304536 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:19:25.785881  304536 kubeadm.go:310] 
	I0122 21:19:25.786851  304536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:19:25.786977  304536 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:19:25.787128  304536 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0122 21:19:25.787272  304536 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-181389] and IPs [192.168.72.222 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-181389] and IPs [192.168.72.222 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-181389] and IPs [192.168.72.222 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-181389] and IPs [192.168.72.222 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0122 21:19:25.787338  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:19:28.424734  304536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.637361929s)
	I0122 21:19:28.424833  304536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:19:28.442203  304536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:19:28.455536  304536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:19:28.455575  304536 kubeadm.go:157] found existing configuration files:
	
	I0122 21:19:28.455637  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:19:28.468998  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:19:28.469071  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:19:28.481395  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:19:28.494031  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:19:28.494123  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:19:28.506558  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:19:28.517989  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:19:28.518080  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:19:28.530049  304536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:19:28.543830  304536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:19:28.543908  304536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:19:28.556324  304536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:19:28.809426  304536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:21:24.937389  304536 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:21:24.937509  304536 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:21:24.940091  304536 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:21:24.940181  304536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:21:24.940282  304536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:21:24.940410  304536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:21:24.940540  304536 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:21:24.940622  304536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:21:24.942617  304536 out.go:235]   - Generating certificates and keys ...
	I0122 21:21:24.942744  304536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:21:24.942854  304536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:21:24.943001  304536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:21:24.943110  304536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:21:24.943219  304536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:21:24.943316  304536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:21:24.943419  304536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:21:24.943524  304536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:21:24.943636  304536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:21:24.943727  304536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:21:24.943781  304536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:21:24.943887  304536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:21:24.943969  304536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:21:24.944017  304536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:21:24.944086  304536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:21:24.944158  304536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:21:24.944298  304536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:21:24.944414  304536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:21:24.944478  304536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:21:24.944559  304536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:21:24.946371  304536 out.go:235]   - Booting up control plane ...
	I0122 21:21:24.946500  304536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:21:24.946614  304536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:21:24.946703  304536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:21:24.946855  304536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:21:24.947113  304536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:21:24.947177  304536 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:21:24.947263  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:21:24.947498  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:21:24.947597  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:21:24.947744  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:21:24.947808  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:21:24.948015  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:21:24.948097  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:21:24.948268  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:21:24.948324  304536 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:21:24.948538  304536 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:21:24.948554  304536 kubeadm.go:310] 
	I0122 21:21:24.948605  304536 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:21:24.948650  304536 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:21:24.948657  304536 kubeadm.go:310] 
	I0122 21:21:24.948702  304536 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:21:24.948748  304536 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:21:24.948882  304536 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:21:24.948897  304536 kubeadm.go:310] 
	I0122 21:21:24.949016  304536 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:21:24.949067  304536 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:21:24.949109  304536 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:21:24.949119  304536 kubeadm.go:310] 
	I0122 21:21:24.949251  304536 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:21:24.949366  304536 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:21:24.949377  304536 kubeadm.go:310] 
	I0122 21:21:24.949512  304536 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:21:24.949618  304536 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:21:24.949716  304536 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:21:24.949775  304536 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:21:24.949854  304536 kubeadm.go:394] duration metric: took 3m58.090019905s to StartCluster
	I0122 21:21:24.949907  304536 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:21:24.949965  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:21:24.950021  304536 kubeadm.go:310] 
	I0122 21:21:25.023254  304536 cri.go:89] found id: ""
	I0122 21:21:25.023292  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.023304  304536 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:21:25.023313  304536 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:21:25.023396  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:21:25.073210  304536 cri.go:89] found id: ""
	I0122 21:21:25.073247  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.073260  304536 logs.go:284] No container was found matching "etcd"
	I0122 21:21:25.073272  304536 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:21:25.073346  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:21:25.123863  304536 cri.go:89] found id: ""
	I0122 21:21:25.123896  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.123906  304536 logs.go:284] No container was found matching "coredns"
	I0122 21:21:25.123915  304536 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:21:25.123985  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:21:25.171334  304536 cri.go:89] found id: ""
	I0122 21:21:25.171376  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.171389  304536 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:21:25.171397  304536 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:21:25.171471  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:21:25.226280  304536 cri.go:89] found id: ""
	I0122 21:21:25.226312  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.226323  304536 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:21:25.226331  304536 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:21:25.226421  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:21:25.275261  304536 cri.go:89] found id: ""
	I0122 21:21:25.275343  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.275364  304536 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:21:25.275379  304536 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:21:25.275460  304536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:21:25.324681  304536 cri.go:89] found id: ""
	I0122 21:21:25.324713  304536 logs.go:282] 0 containers: []
	W0122 21:21:25.324723  304536 logs.go:284] No container was found matching "kindnet"
	I0122 21:21:25.324750  304536 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:21:25.324767  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:21:25.465350  304536 logs.go:123] Gathering logs for container status ...
	I0122 21:21:25.465395  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:21:25.533076  304536 logs.go:123] Gathering logs for kubelet ...
	I0122 21:21:25.533126  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:21:25.604874  304536 logs.go:123] Gathering logs for dmesg ...
	I0122 21:21:25.604930  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:21:25.624144  304536 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:21:25.624194  304536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:21:25.788127  304536 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0122 21:21:25.788181  304536 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:21:25.788250  304536 out.go:270] * 
	* 
	W0122 21:21:25.788322  304536 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:21:25.788342  304536 out.go:270] * 
	* 
	W0122 21:21:25.789271  304536 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:21:25.792485  304536 out.go:201] 
	W0122 21:21:25.793868  304536 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:21:25.793941  304536 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:21:25.793969  304536 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:21:25.795596  304536 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 6 (275.470658ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0122 21:21:26.129361  311785 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-181389" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-181389" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (298.86s)
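For manual follow-up, the diagnostics and the retry flag already suggested in the log above can be combined as follows. This is a sketch only: it reuses the profile name and core flags from the failing start command at start_stop_delete_test.go:186, and none of these commands were executed as part of this run.

	# kubelet diagnostics inside the VM, as suggested by the kubeadm output above
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver override suggested by minikube (see issue 4172 referenced above)
	out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd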

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (1620.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-806477 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:20:28.823006  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:30.980694  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:31.593541  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:50.257744  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:52.075171  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:56.422058  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-806477 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m57.603621834s)

                                                
                                                
-- stdout --
	* [no-preload-806477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-806477" primary control-plane node in "no-preload-806477" cluster
	* Restarting existing kvm2 VM for "no-preload-806477" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-806477 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
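The start output above ends with the dashboard/metrics-server hint; a short follow-up for a local run of the same profile (a sketch, assuming no-preload-806477 is still reachable) would be:

	# enable the metrics-server addon the dashboard message asks for
	minikube -p no-preload-806477 addons enable metrics-server
	# confirm the addon set matches the "Enabled addons" line above
	minikube -p no-preload-806477 addons list
	# open the dashboard against the same profile without launching a browser
	minikube -p no-preload-806477 dashboard --url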
** stderr ** 
	I0122 21:20:23.281513  311280 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:20:23.281780  311280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:20:23.281789  311280 out.go:358] Setting ErrFile to fd 2...
	I0122 21:20:23.281793  311280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:20:23.281988  311280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:20:23.282666  311280 out.go:352] Setting JSON to false
	I0122 21:20:23.283766  311280 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14569,"bootTime":1737566254,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:20:23.283893  311280 start.go:139] virtualization: kvm guest
	I0122 21:20:23.286240  311280 out.go:177] * [no-preload-806477] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:20:23.287641  311280 notify.go:220] Checking for updates...
	I0122 21:20:23.287668  311280 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:20:23.289201  311280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:20:23.290623  311280 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:20:23.291917  311280 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:20:23.293092  311280 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:20:23.294278  311280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:20:23.295961  311280 config.go:182] Loaded profile config "no-preload-806477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:20:23.296400  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:20:23.296476  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:20:23.312843  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0122 21:20:23.313391  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:20:23.314010  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:20:23.314036  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:20:23.314486  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:20:23.314683  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:23.314952  311280 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:20:23.315280  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:20:23.315336  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:20:23.332007  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0122 21:20:23.332652  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:20:23.333299  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:20:23.333335  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:20:23.333736  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:20:23.333944  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:23.374425  311280 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:20:23.375946  311280 start.go:297] selected driver: kvm2
	I0122 21:20:23.375976  311280 start.go:901] validating driver "kvm2" against &{Name:no-preload-806477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-806477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:20:23.376117  311280 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:20:23.376855  311280 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.376956  311280 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:20:23.393933  311280 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:20:23.394451  311280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:20:23.394508  311280 cni.go:84] Creating CNI manager for ""
	I0122 21:20:23.394567  311280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:20:23.394623  311280 start.go:340] cluster config:
	{Name:no-preload-806477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-806477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:20:23.394771  311280 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.396940  311280 out.go:177] * Starting "no-preload-806477" primary control-plane node in "no-preload-806477" cluster
	I0122 21:20:23.398349  311280 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:20:23.398537  311280 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/config.json ...
	I0122 21:20:23.398605  311280 cache.go:107] acquiring lock: {Name:mk234e6c8d0aec969b7f7f65166f4e620f46c117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398647  311280 cache.go:107] acquiring lock: {Name:mk397e5f5201b81f8b7c6359c747b1fa9c3437ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398644  311280 cache.go:107] acquiring lock: {Name:mkfe096d6449cd1eb860e923b85a8db7eb52718b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398596  311280 cache.go:107] acquiring lock: {Name:mkc1d63cc92a2ebacbaff81976fa5db8c35e43b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398598  311280 cache.go:107] acquiring lock: {Name:mk41f18986fc20cbaea293f8bfa24d9f308e6607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398709  311280 cache.go:107] acquiring lock: {Name:mkdc42476f6187964f4660a072387def10c72c55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398742  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0122 21:20:23.398755  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0122 21:20:23.398776  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0122 21:20:23.398763  311280 cache.go:107] acquiring lock: {Name:mk4a711b69d04604905954684c20ac6d97bcb61c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398805  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0122 21:20:23.398825  311280 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 244.151µs
	I0122 21:20:23.398839  311280 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0122 21:20:23.398818  311280 cache.go:107] acquiring lock: {Name:mka6bd2b86bb77951fc011ce5d4e7f2fb80f54ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:20:23.398858  311280 start.go:360] acquireMachinesLock for no-preload-806477: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:20:23.398904  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0122 21:20:23.398776  311280 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 128.492µs
	I0122 21:20:23.398928  311280 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0122 21:20:23.398927  311280 start.go:364] duration metric: took 54.82µs to acquireMachinesLock for "no-preload-806477"
	I0122 21:20:23.398920  311280 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 210.034µs
	I0122 21:20:23.398943  311280 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0122 21:20:23.398793  311280 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 211.194µs
	I0122 21:20:23.398976  311280 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0122 21:20:23.398952  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0122 21:20:23.398992  311280 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 226.381µs
	I0122 21:20:23.399004  311280 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0122 21:20:23.398756  311280 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 156.31µs
	I0122 21:20:23.399013  311280 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0122 21:20:23.398761  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0122 21:20:23.399028  311280 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 386.465µs
	I0122 21:20:23.399039  311280 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0122 21:20:23.398801  311280 cache.go:115] /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0122 21:20:23.399049  311280 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 375.656µs
	I0122 21:20:23.399055  311280 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0122 21:20:23.399061  311280 cache.go:87] Successfully saved all images to host disk.
	I0122 21:20:23.398955  311280 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:20:23.399084  311280 fix.go:54] fixHost starting: 
	I0122 21:20:23.399419  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:20:23.399454  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:20:23.415620  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0122 21:20:23.416186  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:20:23.416774  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:20:23.416800  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:20:23.417210  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:20:23.417476  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:23.417640  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetState
	I0122 21:20:23.419694  311280 fix.go:112] recreateIfNeeded on no-preload-806477: state=Stopped err=<nil>
	I0122 21:20:23.419745  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	W0122 21:20:23.419976  311280 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:20:23.422148  311280 out.go:177] * Restarting existing kvm2 VM for "no-preload-806477" ...
	I0122 21:20:23.423795  311280 main.go:141] libmachine: (no-preload-806477) Calling .Start
	I0122 21:20:23.424186  311280 main.go:141] libmachine: (no-preload-806477) starting domain...
	I0122 21:20:23.424214  311280 main.go:141] libmachine: (no-preload-806477) ensuring networks are active...
	I0122 21:20:23.425168  311280 main.go:141] libmachine: (no-preload-806477) Ensuring network default is active
	I0122 21:20:23.425632  311280 main.go:141] libmachine: (no-preload-806477) Ensuring network mk-no-preload-806477 is active
	I0122 21:20:23.426116  311280 main.go:141] libmachine: (no-preload-806477) getting domain XML...
	I0122 21:20:23.427081  311280 main.go:141] libmachine: (no-preload-806477) creating domain...
	I0122 21:20:24.719389  311280 main.go:141] libmachine: (no-preload-806477) waiting for IP...
	I0122 21:20:24.720344  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:24.720834  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:24.720938  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:24.720829  311316 retry.go:31] will retry after 276.56727ms: waiting for domain to come up
	I0122 21:20:24.999470  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:25.000111  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:25.000139  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:25.000080  311316 retry.go:31] will retry after 293.712285ms: waiting for domain to come up
	I0122 21:20:25.295839  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:25.296367  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:25.296410  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:25.296336  311316 retry.go:31] will retry after 313.626926ms: waiting for domain to come up
	I0122 21:20:25.612124  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:25.612889  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:25.612923  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:25.612848  311316 retry.go:31] will retry after 461.690113ms: waiting for domain to come up
	I0122 21:20:26.076645  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:26.077350  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:26.077395  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:26.077285  311316 retry.go:31] will retry after 665.741669ms: waiting for domain to come up
	I0122 21:20:26.745157  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:26.745710  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:26.745728  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:26.745680  311316 retry.go:31] will retry after 573.867942ms: waiting for domain to come up
	I0122 21:20:27.321930  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:27.322567  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:27.322598  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:27.322541  311316 retry.go:31] will retry after 1.11166855s: waiting for domain to come up
	I0122 21:20:28.435804  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:28.436346  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:28.436375  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:28.436303  311316 retry.go:31] will retry after 1.008717524s: waiting for domain to come up
	I0122 21:20:29.446514  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:29.447216  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:29.447254  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:29.447113  311316 retry.go:31] will retry after 1.362835357s: waiting for domain to come up
	I0122 21:20:30.811710  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:30.812228  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:30.812263  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:30.812153  311316 retry.go:31] will retry after 1.825022546s: waiting for domain to come up
	I0122 21:20:32.639233  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:32.639823  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:32.639857  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:32.639779  311316 retry.go:31] will retry after 2.159281749s: waiting for domain to come up
	I0122 21:20:34.801268  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:34.801867  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:34.801925  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:34.801849  311316 retry.go:31] will retry after 2.59014491s: waiting for domain to come up
	I0122 21:20:37.395434  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:37.396097  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:37.396133  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:37.396043  311316 retry.go:31] will retry after 2.910730366s: waiting for domain to come up
	I0122 21:20:40.309925  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:40.310462  311280 main.go:141] libmachine: (no-preload-806477) DBG | unable to find current IP address of domain no-preload-806477 in network mk-no-preload-806477
	I0122 21:20:40.310487  311280 main.go:141] libmachine: (no-preload-806477) DBG | I0122 21:20:40.310434  311316 retry.go:31] will retry after 4.36326873s: waiting for domain to come up
	I0122 21:20:44.675788  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.676350  311280 main.go:141] libmachine: (no-preload-806477) found domain IP: 192.168.39.10
	I0122 21:20:44.676373  311280 main.go:141] libmachine: (no-preload-806477) reserving static IP address...
	I0122 21:20:44.676402  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has current primary IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.676852  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "no-preload-806477", mac: "52:54:00:66:52:4a", ip: "192.168.39.10"} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:44.676886  311280 main.go:141] libmachine: (no-preload-806477) reserved static IP address 192.168.39.10 for domain no-preload-806477
	I0122 21:20:44.676907  311280 main.go:141] libmachine: (no-preload-806477) DBG | skip adding static IP to network mk-no-preload-806477 - found existing host DHCP lease matching {name: "no-preload-806477", mac: "52:54:00:66:52:4a", ip: "192.168.39.10"}
	I0122 21:20:44.676916  311280 main.go:141] libmachine: (no-preload-806477) waiting for SSH...
	I0122 21:20:44.676932  311280 main.go:141] libmachine: (no-preload-806477) DBG | Getting to WaitForSSH function...
	I0122 21:20:44.679225  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.679559  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:44.679618  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.679711  311280 main.go:141] libmachine: (no-preload-806477) DBG | Using SSH client type: external
	I0122 21:20:44.679742  311280 main.go:141] libmachine: (no-preload-806477) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa (-rw-------)
	I0122 21:20:44.679784  311280 main.go:141] libmachine: (no-preload-806477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:20:44.679809  311280 main.go:141] libmachine: (no-preload-806477) DBG | About to run SSH command:
	I0122 21:20:44.679824  311280 main.go:141] libmachine: (no-preload-806477) DBG | exit 0
	I0122 21:20:44.806807  311280 main.go:141] libmachine: (no-preload-806477) DBG | SSH cmd err, output: <nil>: 
	I0122 21:20:44.807203  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetConfigRaw
	I0122 21:20:44.807983  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetIP
	I0122 21:20:44.810987  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.811372  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:44.811410  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.811727  311280 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/config.json ...
	I0122 21:20:44.811975  311280 machine.go:93] provisionDockerMachine start ...
	I0122 21:20:44.811997  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:44.812275  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:44.814779  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.815101  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:44.815129  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.815278  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:44.815514  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:44.815697  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:44.815815  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:44.815995  311280 main.go:141] libmachine: Using SSH client type: native
	I0122 21:20:44.816194  311280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0122 21:20:44.816205  311280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:20:44.923234  311280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:20:44.923271  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetMachineName
	I0122 21:20:44.923604  311280 buildroot.go:166] provisioning hostname "no-preload-806477"
	I0122 21:20:44.923641  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetMachineName
	I0122 21:20:44.923844  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:44.926969  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.927422  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:44.927458  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:44.927579  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:44.927830  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:44.928004  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:44.928147  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:44.928296  311280 main.go:141] libmachine: Using SSH client type: native
	I0122 21:20:44.928529  311280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0122 21:20:44.928541  311280 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-806477 && echo "no-preload-806477" | sudo tee /etc/hostname
	I0122 21:20:45.061430  311280 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-806477
	
	I0122 21:20:45.061460  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:45.064210  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.064626  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:45.064671  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.064936  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:45.065153  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:45.065333  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:45.065493  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:45.065672  311280 main.go:141] libmachine: Using SSH client type: native
	I0122 21:20:45.065882  311280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0122 21:20:45.065909  311280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-806477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-806477/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-806477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:20:45.184647  311280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:20:45.184691  311280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:20:45.184769  311280 buildroot.go:174] setting up certificates
	I0122 21:20:45.184791  311280 provision.go:84] configureAuth start
	I0122 21:20:45.184812  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetMachineName
	I0122 21:20:45.185131  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetIP
	I0122 21:20:45.188090  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.188482  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:45.188522  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.188754  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:45.191339  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.191682  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:45.191707  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.191956  311280 provision.go:143] copyHostCerts
	I0122 21:20:45.192019  311280 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:20:45.192049  311280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:20:45.192123  311280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:20:45.192217  311280 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:20:45.192226  311280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:20:45.192250  311280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:20:45.192300  311280 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:20:45.192308  311280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:20:45.192328  311280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:20:45.192375  311280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.no-preload-806477 san=[127.0.0.1 192.168.39.10 localhost minikube no-preload-806477]
	I0122 21:20:45.474350  311280 provision.go:177] copyRemoteCerts
	I0122 21:20:45.474423  311280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:20:45.474454  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:45.477201  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.477509  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:45.477547  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.477751  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:45.478007  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:45.478251  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:45.478414  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:20:45.566201  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:20:45.595746  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:20:45.624583  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:20:45.652718  311280 provision.go:87] duration metric: took 467.900188ms to configureAuth
	I0122 21:20:45.652764  311280 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:20:45.652990  311280 config.go:182] Loaded profile config "no-preload-806477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:20:45.653083  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:45.655810  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.656186  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:45.656208  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.656447  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:45.656664  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:45.656833  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:45.656968  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:45.657111  311280 main.go:141] libmachine: Using SSH client type: native
	I0122 21:20:45.657338  311280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0122 21:20:45.657357  311280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:20:45.893550  311280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:20:45.893585  311280 machine.go:96] duration metric: took 1.081595104s to provisionDockerMachine
	I0122 21:20:45.893598  311280 start.go:293] postStartSetup for "no-preload-806477" (driver="kvm2")
	I0122 21:20:45.893608  311280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:20:45.893627  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:45.893979  311280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:20:45.894011  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:45.897119  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.897463  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:45.897496  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:45.897667  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:45.897902  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:45.898043  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:45.898179  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:20:45.985728  311280 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:20:45.991240  311280 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:20:45.991278  311280 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:20:45.991349  311280 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:20:45.991418  311280 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:20:45.991512  311280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:20:46.002779  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:20:46.035153  311280 start.go:296] duration metric: took 141.536851ms for postStartSetup
	I0122 21:20:46.035206  311280 fix.go:56] duration metric: took 22.636124495s for fixHost
	I0122 21:20:46.035230  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:46.038293  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.038747  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:46.038786  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.038966  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:46.039213  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:46.039380  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:46.039523  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:46.039701  311280 main.go:141] libmachine: Using SSH client type: native
	I0122 21:20:46.039921  311280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0122 21:20:46.039935  311280 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:20:46.147608  311280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580846.100621385
	
	I0122 21:20:46.147638  311280 fix.go:216] guest clock: 1737580846.100621385
	I0122 21:20:46.147646  311280 fix.go:229] Guest: 2025-01-22 21:20:46.100621385 +0000 UTC Remote: 2025-01-22 21:20:46.0352103 +0000 UTC m=+22.796724483 (delta=65.411085ms)
	I0122 21:20:46.147704  311280 fix.go:200] guest clock delta is within tolerance: 65.411085ms
	I0122 21:20:46.147709  311280 start.go:83] releasing machines lock for "no-preload-806477", held for 22.748771135s
	I0122 21:20:46.147731  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:46.148037  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetIP
	I0122 21:20:46.150852  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.151259  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:46.151293  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.151483  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:46.152094  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:46.152307  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:20:46.152374  311280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:20:46.152443  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:46.152597  311280 ssh_runner.go:195] Run: cat /version.json
	I0122 21:20:46.152627  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:20:46.155331  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.155470  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.155719  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:46.155751  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.155909  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:46.155943  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:46.155962  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:46.156210  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:20:46.156246  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:46.156384  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:46.156406  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:20:46.156553  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:20:46.156575  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:20:46.156708  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:20:46.235920  311280 ssh_runner.go:195] Run: systemctl --version
	I0122 21:20:46.259708  311280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:20:46.408293  311280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:20:46.415284  311280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:20:46.415385  311280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:20:46.435141  311280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:20:46.435171  311280 start.go:495] detecting cgroup driver to use...
	I0122 21:20:46.435252  311280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:20:46.452928  311280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:20:46.468789  311280 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:20:46.468856  311280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:20:46.484348  311280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:20:46.499861  311280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:20:46.620009  311280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:20:46.767056  311280 docker.go:233] disabling docker service ...
	I0122 21:20:46.767155  311280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:20:46.783572  311280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:20:46.798571  311280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:20:46.950959  311280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:20:47.088494  311280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:20:47.104653  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:20:47.126314  311280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:20:47.126398  311280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:20:47.139115  311280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:20:47.139201  311280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:20:47.152224  311280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:20:47.165110  311280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:20:47.177996  311280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:20:47.191151  311280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:20:47.204571  311280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:20:47.224768  311280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
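The sed commands above amount to in-place edits of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup driver to cgroupfs, and re-add conmon_cgroup = "pod" after it. A minimal Go sketch of the same substitutions follows; the file path, keys and values are taken from the log, but the program itself is hypothetical, not minikube code.

	// rewrite_crio_conf.go - illustrative sketch of the logged sed edits (run as root).
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)
		// Pin the pause image and the cgroup driver, as the logged sed commands do.
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
			panic(err)
		}
	}
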
	I0122 21:20:47.238207  311280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:20:47.250158  311280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:20:47.250258  311280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:20:47.265486  311280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
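The exit-status-255 sysctl probe above simply means /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the log falls back to modprobe and then enables IPv4 forwarding. A small Go sketch of that fallback (hypothetical helper, needs root; paths and commands come from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// The sysctl file is absent; load br_netfilter as the log does.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
				os.Exit(1)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` from the log.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
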
	I0122 21:20:47.277068  311280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:20:47.410929  311280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:20:47.514015  311280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:20:47.514117  311280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:20:47.519903  311280 start.go:563] Will wait 60s for crictl version
	I0122 21:20:47.519976  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:47.524662  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:20:47.567308  311280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
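After restarting crio the log waits up to 60 seconds for the CRI socket and then for crictl to report a version. A minimal sketch of the socket wait, assuming a simple poll loop (the 500ms interval is an assumption; the path and timeout come from the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock" // socket path from the log
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("crio socket is up")
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
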
	I0122 21:20:47.567396  311280 ssh_runner.go:195] Run: crio --version
	I0122 21:20:47.602778  311280 ssh_runner.go:195] Run: crio --version
	I0122 21:20:47.636201  311280 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:20:47.637646  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetIP
	I0122 21:20:47.640589  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:47.640957  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:20:47.640993  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:20:47.641211  311280 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0122 21:20:47.646059  311280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
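The bash one-liner above pins host.minikube.internal in /etc/hosts by filtering out any existing entry and appending a fresh one for 192.168.39.1. The same rewrite, sketched in Go (hypothetical program; IP, hostname and filter come from the log; run as root):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.1\thost.minikube.internal" // IP and hostname from the log
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Same filter as the logged `grep -v $'\thost.minikube.internal$'`.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
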
	I0122 21:20:47.660009  311280 kubeadm.go:883] updating cluster {Name:no-preload-806477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-806477 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:20:47.660170  311280 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:20:47.660213  311280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:20:47.701739  311280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:20:47.701770  311280 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.1 registry.k8s.io/kube-controller-manager:v1.32.1 registry.k8s.io/kube-scheduler:v1.32.1 registry.k8s.io/kube-proxy:v1.32.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0122 21:20:47.701819  311280 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:47.701844  311280 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:47.701919  311280 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0122 21:20:47.701943  311280 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:47.701978  311280 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:47.702007  311280 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:47.701929  311280 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:47.701982  311280 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:47.703665  311280 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:47.703715  311280 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:47.703704  311280 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:47.703795  311280 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:47.703715  311280 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:47.703815  311280 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:47.703801  311280 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:47.703841  311280 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0122 21:20:47.843260  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:47.850536  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:47.855706  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:47.856605  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:47.866233  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:47.869393  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0122 21:20:47.890741  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:47.996439  311280 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0122 21:20:47.996502  311280 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:47.996537  311280 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.1" does not exist at hash "019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35" in container runtime
	I0122 21:20:47.996551  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:47.996575  311280 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:47.996633  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:48.064069  311280 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.1" does not exist at hash "95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a" in container runtime
	I0122 21:20:48.064129  311280 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.1" does not exist at hash "2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1" in container runtime
	I0122 21:20:48.064136  311280 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:48.064165  311280 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:48.064208  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:48.064220  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:48.068520  311280 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0122 21:20:48.068584  311280 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:48.068655  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:48.087482  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:48.087507  311280 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.1" needs transfer: "registry.k8s.io/kube-proxy:v1.32.1" does not exist at hash "e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a" in container runtime
	I0122 21:20:48.087556  311280 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:48.087566  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:48.087586  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:48.087643  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:48.087649  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:48.087671  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:48.207205  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:48.207284  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:48.207321  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:48.207383  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:48.207562  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:48.207611  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:48.345631  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0122 21:20:48.353489  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:48.357563  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0122 21:20:48.357619  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0122 21:20:48.362393  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0122 21:20:48.362536  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0122 21:20:48.483493  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0122 21:20:48.483633  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0122 21:20:48.508234  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0122 21:20:48.508305  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0122 21:20:48.508332  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0122 21:20:48.508409  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0122 21:20:48.508422  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0122 21:20:48.518414  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0122 21:20:48.518520  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0122 21:20:48.518559  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0122 21:20:48.518631  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0122 21:20:48.521063  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0122 21:20:48.521094  311280 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0122 21:20:48.521149  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0122 21:20:48.559420  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.1 (exists)
	I0122 21:20:48.559484  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0122 21:20:48.559497  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.1 (exists)
	I0122 21:20:48.559530  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.1 (exists)
	I0122 21:20:48.559576  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0122 21:20:48.559693  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1
	I0122 21:20:48.755254  311280 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:52.445768  311280 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.924591743s)
	I0122 21:20:52.445818  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0122 21:20:52.445845  311280 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0122 21:20:52.445851  311280 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1: (3.88613028s)
	I0122 21:20:52.445896  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.1 (exists)
	I0122 21:20:52.445905  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0122 21:20:52.445963  311280 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.690665085s)
	I0122 21:20:52.446017  311280 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0122 21:20:52.446054  311280 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:52.446111  311280 ssh_runner.go:195] Run: which crictl
	I0122 21:20:55.025131  311280 ssh_runner.go:235] Completed: which crictl: (2.57899514s)
	I0122 21:20:55.025228  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:55.025145  311280 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1: (2.579214338s)
	I0122 21:20:55.025314  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 from cache
	I0122 21:20:55.025356  311280 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0122 21:20:55.025418  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0122 21:20:55.068531  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:57.112411  311280 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.043835666s)
	I0122 21:20:57.112531  311280 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:20:57.112414  311280 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.086962584s)
	I0122 21:20:57.112572  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0122 21:20:57.112616  311280 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0122 21:20:57.112676  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0122 21:20:57.155428  311280 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0122 21:20:57.155584  311280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0122 21:20:58.597246  311280 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1: (1.484535948s)
	I0122 21:20:58.597288  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 from cache
	I0122 21:20:58.597307  311280 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.441695436s)
	I0122 21:20:58.597325  311280 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0122 21:20:58.597339  311280 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0122 21:20:58.597420  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0122 21:21:00.565803  311280 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.968351379s)
	I0122 21:21:00.565839  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 from cache
	I0122 21:21:00.565869  311280 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.1
	I0122 21:21:00.565913  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1
	I0122 21:21:02.941419  311280 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (2.375478227s)
	I0122 21:21:02.941463  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0122 21:21:02.941499  311280 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0122 21:21:02.941557  311280 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0122 21:21:03.701128  311280 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0122 21:21:03.701180  311280 cache_images.go:123] Successfully loaded all cached images
	I0122 21:21:03.701189  311280 cache_images.go:92] duration metric: took 15.99940305s to LoadCachedImages
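The preceding block is the cache-load loop for a no-preload cluster: each image tarball is stat'ed on the node, the copy is skipped when it already exists, and the tar is loaded with podman. A hypothetical Go sketch of that per-image step (the tar path is one of the paths from the log; error handling is simplified):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func loadCached(tar string) error {
		if _, err := os.Stat(tar); err != nil {
			// In the log the tar would be copied over SSH first; here we only
			// illustrate the "copy: skipping ... (exists)" branch.
			return fmt.Errorf("image tar not present: %w", err)
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := loadCached("/var/lib/minikube/images/etcd_3.5.16-0"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
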
	I0122 21:21:03.701206  311280 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.32.1 crio true true} ...
	I0122 21:21:03.701352  311280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-806477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-806477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:21:03.701466  311280 ssh_runner.go:195] Run: crio config
	I0122 21:21:03.754255  311280 cni.go:84] Creating CNI manager for ""
	I0122 21:21:03.754283  311280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:21:03.754296  311280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:21:03.754328  311280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-806477 NodeName:no-preload-806477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:21:03.754519  311280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-806477"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:21:03.754610  311280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:21:03.767468  311280 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:21:03.767566  311280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:21:03.779845  311280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0122 21:21:03.803668  311280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:21:03.823755  311280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0122 21:21:03.845524  311280 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0122 21:21:03.850074  311280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:21:03.865002  311280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:21:04.018170  311280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:21:04.039366  311280 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477 for IP: 192.168.39.10
	I0122 21:21:04.039400  311280 certs.go:194] generating shared ca certs ...
	I0122 21:21:04.039428  311280 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:21:04.039683  311280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:21:04.039739  311280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:21:04.039755  311280 certs.go:256] generating profile certs ...
	I0122 21:21:04.039879  311280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/client.key
	I0122 21:21:04.039997  311280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/apiserver.key.9dbbfa60
	I0122 21:21:04.040063  311280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/proxy-client.key
	I0122 21:21:04.040220  311280 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:21:04.040268  311280 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:21:04.040279  311280 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:21:04.040300  311280 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:21:04.040324  311280 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:21:04.040354  311280 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:21:04.040415  311280 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:21:04.041145  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:21:04.085757  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:21:04.122816  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:21:04.162147  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:21:04.216834  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:21:04.274440  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 21:21:04.305565  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:21:04.336690  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/no-preload-806477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:21:04.371543  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:21:04.400140  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:21:04.429643  311280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:21:04.458312  311280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:21:04.479622  311280 ssh_runner.go:195] Run: openssl version
	I0122 21:21:04.486935  311280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:21:04.500108  311280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:21:04.506408  311280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:21:04.506488  311280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:21:04.514064  311280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:21:04.529097  311280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:21:04.543320  311280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:21:04.549003  311280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:21:04.549108  311280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:21:04.555972  311280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:21:04.569358  311280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:21:04.584507  311280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:21:04.590595  311280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:21:04.590665  311280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:21:04.597631  311280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:21:04.612545  311280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:21:04.618365  311280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:21:04.625726  311280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:21:04.633206  311280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:21:04.640672  311280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:21:04.647819  311280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:21:04.655388  311280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
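Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours). An equivalent check sketched with Go's crypto/x509 (hypothetical helper; the certificate path is one of the paths probed in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the PEM certificate at path expires within d.
	func expiresSoon(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
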
	I0122 21:21:04.662894  311280 kubeadm.go:392] StartCluster: {Name:no-preload-806477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-806477 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:21:04.663000  311280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:21:04.663069  311280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:21:04.712178  311280 cri.go:89] found id: ""
	I0122 21:21:04.712250  311280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:21:04.725628  311280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:21:04.725651  311280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:21:04.725699  311280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:21:04.737524  311280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:21:04.738404  311280 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-806477" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:21:04.738775  311280 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-806477" cluster setting kubeconfig missing "no-preload-806477" context setting]
	I0122 21:21:04.739445  311280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:21:04.741501  311280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:21:04.752904  311280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.10
	I0122 21:21:04.752952  311280 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:21:04.752971  311280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:21:04.753064  311280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:21:04.794572  311280 cri.go:89] found id: ""
	I0122 21:21:04.794666  311280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:21:04.813997  311280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:21:04.825381  311280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:21:04.825402  311280 kubeadm.go:157] found existing configuration files:
	
	I0122 21:21:04.825459  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:21:04.838085  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:21:04.838154  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:21:04.851209  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:21:04.864055  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:21:04.864137  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:21:04.877353  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:21:04.889062  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:21:04.889151  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:21:04.901444  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:21:04.916629  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:21:04.916720  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
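The four grep/rm pairs above implement the stale-config cleanup: any component kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is removed before kubeadm regenerates it. A compact Go sketch of that loop (hypothetical program; endpoint and file list come from the log; run as root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443" // from the log
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Same effect as the logged `sudo rm -f`: missing files are ignored.
				os.Remove(f)
				fmt.Println("removed (or absent):", f)
			}
		}
	}
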
	I0122 21:21:04.928339  311280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:21:04.941037  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:21:05.078026  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:21:06.293581  311280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.215500611s)
	I0122 21:21:06.293654  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:21:06.523361  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:21:06.605551  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:21:06.741618  311280 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:21:06.741730  311280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:21:07.242859  311280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:21:07.742043  311280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:21:07.809147  311280 api_server.go:72] duration metric: took 1.067528792s to wait for apiserver process to appear ...
	I0122 21:21:07.809183  311280 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:21:07.809212  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:21:07.809784  311280 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": dial tcp 192.168.39.10:8443: connect: connection refused
	I0122 21:21:08.309596  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:21:10.830267  311280 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:21:10.830309  311280 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:21:10.830332  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:21:10.877796  311280 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:21:10.877847  311280 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:21:11.309362  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:21:11.315054  311280 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:21:11.315104  311280 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:21:11.809820  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:21:11.826024  311280 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:21:11.826083  311280 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:21:12.309449  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:21:12.326053  311280 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0122 21:21:12.338708  311280 api_server.go:141] control plane version: v1.32.1
	I0122 21:21:12.338769  311280 api_server.go:131] duration metric: took 4.529575859s to wait for apiserver health ...
	I0122 21:21:12.338786  311280 cni.go:84] Creating CNI manager for ""
	I0122 21:21:12.338796  311280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:21:12.340718  311280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:21:12.342379  311280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:21:12.374063  311280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:21:12.412316  311280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:21:12.427680  311280 system_pods.go:59] 8 kube-system pods found
	I0122 21:21:12.427750  311280 system_pods.go:61] "coredns-668d6bf9bc-6lt8v" [b582572f-7456-4ad5-a17d-29a0d59ffbac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:21:12.427765  311280 system_pods.go:61] "etcd-no-preload-806477" [63732e1b-2a22-4c76-a34b-75824561a734] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:21:12.427783  311280 system_pods.go:61] "kube-apiserver-no-preload-806477" [01df1e07-3b3b-4867-be43-b878231782ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:21:12.427793  311280 system_pods.go:61] "kube-controller-manager-no-preload-806477" [19907ddf-041c-44cc-8960-5f8fee973ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:21:12.427804  311280 system_pods.go:61] "kube-proxy-5sxcc" [dbf14d70-7dc0-4cc5-b09e-430a04f09bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0122 21:21:12.427815  311280 system_pods.go:61] "kube-scheduler-no-preload-806477" [5c8afe20-91c0-468c-a2ca-c1af9614b5dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:21:12.427825  311280 system_pods.go:61] "metrics-server-f79f97bbb-pzkl6" [df4dc4f2-516c-4dbe-a445-c3ca90db9b1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:21:12.427837  311280 system_pods.go:61] "storage-provisioner" [081744f8-d57b-4a02-8f80-d40e055598db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0122 21:21:12.427850  311280 system_pods.go:74] duration metric: took 15.499552ms to wait for pod list to return data ...
	I0122 21:21:12.427862  311280 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:21:12.433712  311280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:21:12.433761  311280 node_conditions.go:123] node cpu capacity is 2
	I0122 21:21:12.433779  311280 node_conditions.go:105] duration metric: took 5.907718ms to run NodePressure ...
	I0122 21:21:12.433807  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:21:12.925648  311280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0122 21:21:12.932353  311280 kubeadm.go:739] kubelet initialised
	I0122 21:21:12.932394  311280 kubeadm.go:740] duration metric: took 6.705592ms waiting for restarted kubelet to initialise ...
	I0122 21:21:12.932411  311280 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:21:12.954048  311280 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-6lt8v" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:14.964072  311280 pod_ready.go:103] pod "coredns-668d6bf9bc-6lt8v" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:17.461138  311280 pod_ready.go:103] pod "coredns-668d6bf9bc-6lt8v" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:19.461697  311280 pod_ready.go:93] pod "coredns-668d6bf9bc-6lt8v" in "kube-system" namespace has status "Ready":"True"
	I0122 21:21:19.461722  311280 pod_ready.go:82] duration metric: took 6.507623279s for pod "coredns-668d6bf9bc-6lt8v" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:19.461734  311280 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:19.467059  311280 pod_ready.go:93] pod "etcd-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:21:19.467086  311280 pod_ready.go:82] duration metric: took 5.344461ms for pod "etcd-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:19.467096  311280 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:21.474252  311280 pod_ready.go:103] pod "kube-apiserver-no-preload-806477" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:23.474541  311280 pod_ready.go:103] pod "kube-apiserver-no-preload-806477" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:25.476711  311280 pod_ready.go:103] pod "kube-apiserver-no-preload-806477" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:25.975513  311280 pod_ready.go:93] pod "kube-apiserver-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:21:25.975539  311280 pod_ready.go:82] duration metric: took 6.508435763s for pod "kube-apiserver-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.975550  311280 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.983359  311280 pod_ready.go:93] pod "kube-controller-manager-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:21:25.983392  311280 pod_ready.go:82] duration metric: took 7.833787ms for pod "kube-controller-manager-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.983409  311280 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5sxcc" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.990173  311280 pod_ready.go:93] pod "kube-proxy-5sxcc" in "kube-system" namespace has status "Ready":"True"
	I0122 21:21:25.990360  311280 pod_ready.go:82] duration metric: took 6.942488ms for pod "kube-proxy-5sxcc" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.990374  311280 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.998007  311280 pod_ready.go:93] pod "kube-scheduler-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:21:25.998044  311280 pod_ready.go:82] duration metric: took 7.661262ms for pod "kube-scheduler-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:25.998059  311280 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace to be "Ready" ...
	I0122 21:21:28.007206  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:30.507566  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:32.509666  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:34.512580  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:36.518100  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:39.007231  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:41.007346  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:43.007636  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:45.011144  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:47.511007  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:50.006165  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:52.506475  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:55.004973  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:57.006670  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:21:59.509200  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:01.509704  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:04.007796  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:06.510712  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:09.006119  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:11.509491  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:13.510593  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:16.004473  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:18.006505  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:20.508820  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:23.006016  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:25.006109  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:27.507694  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:30.005550  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:32.508278  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:35.006788  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:37.505840  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:39.507976  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:42.006036  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:44.006731  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:46.506087  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:48.509788  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:50.510307  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:53.008186  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:55.512179  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:57.512678  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:00.006001  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:02.006261  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:04.007698  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:06.505307  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:08.512750  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:11.005579  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:13.005777  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:15.006390  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:17.006549  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:19.009067  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:21.508437  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:23.513449  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:26.005699  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:28.007802  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:30.508864  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:33.011822  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:35.504395  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:37.505857  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:40.005542  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:42.005846  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:44.007039  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:46.510113  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:49.004820  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:51.005959  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:53.006217  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:55.507416  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:58.004676  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:00.005053  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:02.005288  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:04.005936  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:06.506426  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:08.509417  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:11.006237  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:13.506587  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:15.507499  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:18.007278  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:20.506751  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:22.507445  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:24.509945  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:27.006214  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:29.504846  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:31.506407  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:33.506686  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:36.006246  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:38.507788  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:41.004489  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:43.006278  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:45.506969  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:47.507029  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:49.510030  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:52.005046  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:54.005333  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:56.505428  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:58.506174  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:00.510077  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:03.004998  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:05.006178  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:07.507046  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:09.507747  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:11.508019  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:14.005850  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:16.007147  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:18.506313  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:20.509119  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:23.005499  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:25.508379  311280 pod_ready.go:103] pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:25.998342  311280 pod_ready.go:82] duration metric: took 4m0.000263614s for pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace to be "Ready" ...
	E0122 21:25:25.998375  311280 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-pzkl6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0122 21:25:25.998417  311280 pod_ready.go:39] duration metric: took 4m13.065988936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:25:25.998457  311280 kubeadm.go:597] duration metric: took 4m21.272799561s to restartPrimaryControlPlane
	W0122 21:25:25.998557  311280 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:25:25.998597  311280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:25:54.058835  311280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.060202738s)
	I0122 21:25:54.058948  311280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:25:54.080085  311280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:25:54.092400  311280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:25:54.105659  311280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:25:54.105687  311280 kubeadm.go:157] found existing configuration files:
	
	I0122 21:25:54.105737  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:25:54.117710  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:25:54.117793  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:25:54.130377  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:25:54.142258  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:25:54.142329  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:25:54.154087  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:25:54.165720  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:25:54.165797  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:25:54.177388  311280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:25:54.189103  311280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:25:54.189193  311280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:25:54.201758  311280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:25:54.260366  311280 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:25:54.260532  311280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:25:54.399184  311280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:25:54.399335  311280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:25:54.399518  311280 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:25:54.411658  311280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:25:54.413776  311280 out.go:235]   - Generating certificates and keys ...
	I0122 21:25:54.413893  311280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:25:54.414001  311280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:25:54.414124  311280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:25:54.414228  311280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:25:54.414316  311280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:25:54.414377  311280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:25:54.414461  311280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:25:54.414551  311280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:25:54.414682  311280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:25:54.414787  311280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:25:54.414824  311280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:25:54.414878  311280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:25:54.510873  311280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:25:54.712442  311280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:25:55.004095  311280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:25:55.111746  311280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:25:55.308353  311280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:25:55.308971  311280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:25:55.311764  311280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:25:55.314444  311280 out.go:235]   - Booting up control plane ...
	I0122 21:25:55.314612  311280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:25:55.314709  311280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:25:55.314792  311280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:25:55.340047  311280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:25:55.348057  311280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:25:55.348134  311280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:25:55.507869  311280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:25:55.508066  311280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:25:56.009217  311280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.407634ms
	I0122 21:25:56.009339  311280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:26:02.013570  311280 kubeadm.go:310] [api-check] The API server is healthy after 6.002764789s
	I0122 21:26:02.046114  311280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 21:26:02.074972  311280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 21:26:02.114366  311280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 21:26:02.114624  311280 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-806477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 21:26:02.127604  311280 kubeadm.go:310] [bootstrap-token] Using token: 547gpo.h8i717hajihxedil
	I0122 21:26:02.129183  311280 out.go:235]   - Configuring RBAC rules ...
	I0122 21:26:02.129338  311280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 21:26:02.136210  311280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 21:26:02.145294  311280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 21:26:02.153577  311280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 21:26:02.158283  311280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 21:26:02.163459  311280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 21:26:02.424804  311280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 21:26:02.897186  311280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 21:26:03.426034  311280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 21:26:03.426083  311280 kubeadm.go:310] 
	I0122 21:26:03.426168  311280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 21:26:03.426216  311280 kubeadm.go:310] 
	I0122 21:26:03.426327  311280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 21:26:03.426343  311280 kubeadm.go:310] 
	I0122 21:26:03.426414  311280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 21:26:03.426530  311280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 21:26:03.426628  311280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 21:26:03.426647  311280 kubeadm.go:310] 
	I0122 21:26:03.426729  311280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 21:26:03.426745  311280 kubeadm.go:310] 
	I0122 21:26:03.426821  311280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 21:26:03.426829  311280 kubeadm.go:310] 
	I0122 21:26:03.426902  311280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 21:26:03.427003  311280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 21:26:03.427111  311280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 21:26:03.427120  311280 kubeadm.go:310] 
	I0122 21:26:03.427247  311280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 21:26:03.427354  311280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 21:26:03.427364  311280 kubeadm.go:310] 
	I0122 21:26:03.427483  311280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 547gpo.h8i717hajihxedil \
	I0122 21:26:03.427637  311280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e447fe88d4e43aa7dedab9e7f78d5319a1771f66f483469eded588e9e0904b1d \
	I0122 21:26:03.427680  311280 kubeadm.go:310] 	--control-plane 
	I0122 21:26:03.427689  311280 kubeadm.go:310] 
	I0122 21:26:03.427791  311280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 21:26:03.427804  311280 kubeadm.go:310] 
	I0122 21:26:03.427908  311280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 547gpo.h8i717hajihxedil \
	I0122 21:26:03.428043  311280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e447fe88d4e43aa7dedab9e7f78d5319a1771f66f483469eded588e9e0904b1d 
	I0122 21:26:03.428628  311280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:26:03.428797  311280 cni.go:84] Creating CNI manager for ""
	I0122 21:26:03.428819  311280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:26:03.430790  311280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:26:03.432253  311280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:26:03.445393  311280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:26:03.469901  311280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:26:03.470023  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:03.470045  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-806477 minikube.k8s.io/updated_at=2025_01_22T21_26_03_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=no-preload-806477 minikube.k8s.io/primary=true
	I0122 21:26:03.500789  311280 ops.go:34] apiserver oom_adj: -16
	I0122 21:26:03.746450  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:04.246656  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:04.747201  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:05.246501  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:05.747401  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:06.247030  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:06.747242  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:07.247529  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:07.747455  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:08.247145  311280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:26:08.395088  311280 kubeadm.go:1113] duration metric: took 4.925140337s to wait for elevateKubeSystemPrivileges
	I0122 21:26:08.395134  311280 kubeadm.go:394] duration metric: took 5m3.732250311s to StartCluster
	I0122 21:26:08.395159  311280 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:26:08.395261  311280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:26:08.397818  311280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:26:08.398329  311280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:26:08.398563  311280 config.go:182] Loaded profile config "no-preload-806477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:26:08.398637  311280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:26:08.398741  311280 addons.go:69] Setting storage-provisioner=true in profile "no-preload-806477"
	I0122 21:26:08.398765  311280 addons.go:238] Setting addon storage-provisioner=true in "no-preload-806477"
	W0122 21:26:08.398773  311280 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:26:08.398802  311280 host.go:66] Checking if "no-preload-806477" exists ...
	I0122 21:26:08.399267  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.399315  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.399543  311280 addons.go:69] Setting default-storageclass=true in profile "no-preload-806477"
	I0122 21:26:08.399570  311280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-806477"
	I0122 21:26:08.399580  311280 addons.go:69] Setting metrics-server=true in profile "no-preload-806477"
	I0122 21:26:08.399606  311280 addons.go:238] Setting addon metrics-server=true in "no-preload-806477"
	I0122 21:26:08.399609  311280 addons.go:69] Setting dashboard=true in profile "no-preload-806477"
	I0122 21:26:08.399634  311280 addons.go:238] Setting addon dashboard=true in "no-preload-806477"
	W0122 21:26:08.399645  311280 addons.go:247] addon dashboard should already be in state true
	I0122 21:26:08.399680  311280 host.go:66] Checking if "no-preload-806477" exists ...
	W0122 21:26:08.399615  311280 addons.go:247] addon metrics-server should already be in state true
	I0122 21:26:08.399714  311280 host.go:66] Checking if "no-preload-806477" exists ...
	I0122 21:26:08.399760  311280 out.go:177] * Verifying Kubernetes components...
	I0122 21:26:08.399975  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.400028  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.400117  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.400139  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.400146  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.400182  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.401679  311280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:26:08.419431  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0122 21:26:08.420220  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.420882  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.420916  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.421355  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.421995  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38015
	I0122 21:26:08.422152  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0122 21:26:08.422811  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.423573  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.423596  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.423753  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.423970  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.424037  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.424340  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.424364  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.424753  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.424826  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.425247  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.425282  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.425485  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0122 21:26:08.425942  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.425979  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.426112  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.426847  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.426877  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.427400  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.427616  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetState
	I0122 21:26:08.431570  311280 addons.go:238] Setting addon default-storageclass=true in "no-preload-806477"
	W0122 21:26:08.431599  311280 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:26:08.431631  311280 host.go:66] Checking if "no-preload-806477" exists ...
	I0122 21:26:08.431905  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.431968  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.448802  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0122 21:26:08.449507  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.452255  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I0122 21:26:08.452438  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0122 21:26:08.452687  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.452701  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.453450  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.453538  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.454032  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetState
	I0122 21:26:08.454220  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.454238  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.455232  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.455980  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.456071  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetState
	I0122 21:26:08.458040  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.458069  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.458110  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:26:08.458825  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:26:08.458912  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.459177  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I0122 21:26:08.459380  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetState
	I0122 21:26:08.460040  311280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:26:08.460136  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.460928  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.460947  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.460926  311280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:26:08.461260  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.461572  311280 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:26:08.461595  311280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:26:08.461619  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:26:08.461902  311280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:26:08.461911  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:26:08.461942  311280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:26:08.462865  311280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:26:08.462884  311280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:26:08.462904  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:26:08.463736  311280 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:26:08.465132  311280 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:26:08.466488  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:26:08.466516  311280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:26:08.466551  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:26:08.466985  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.467450  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:26:08.467474  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.467725  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:26:08.467936  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:26:08.468090  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:26:08.468304  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:26:08.469736  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.470143  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.470675  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:26:08.470706  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.470737  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:26:08.470750  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.471026  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:26:08.471090  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:26:08.471272  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:26:08.471289  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:26:08.471414  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:26:08.471489  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:26:08.471670  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:26:08.471750  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:26:08.481859  311280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0122 21:26:08.482441  311280 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:26:08.482985  311280 main.go:141] libmachine: Using API Version  1
	I0122 21:26:08.483008  311280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:26:08.483393  311280 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:26:08.483633  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetState
	I0122 21:26:08.485250  311280 main.go:141] libmachine: (no-preload-806477) Calling .DriverName
	I0122 21:26:08.485504  311280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:26:08.485522  311280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:26:08.485545  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHHostname
	I0122 21:26:08.488385  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.488749  311280 main.go:141] libmachine: (no-preload-806477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:52:4a", ip: ""} in network mk-no-preload-806477: {Iface:virbr1 ExpiryTime:2025-01-22 22:20:36 +0000 UTC Type:0 Mac:52:54:00:66:52:4a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:no-preload-806477 Clientid:01:52:54:00:66:52:4a}
	I0122 21:26:08.488785  311280 main.go:141] libmachine: (no-preload-806477) DBG | domain no-preload-806477 has defined IP address 192.168.39.10 and MAC address 52:54:00:66:52:4a in network mk-no-preload-806477
	I0122 21:26:08.489087  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHPort
	I0122 21:26:08.489327  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHKeyPath
	I0122 21:26:08.489478  311280 main.go:141] libmachine: (no-preload-806477) Calling .GetSSHUsername
	I0122 21:26:08.489603  311280 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/no-preload-806477/id_rsa Username:docker}
	I0122 21:26:08.667798  311280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:26:08.700575  311280 node_ready.go:35] waiting up to 6m0s for node "no-preload-806477" to be "Ready" ...
	I0122 21:26:08.740017  311280 node_ready.go:49] node "no-preload-806477" has status "Ready":"True"
	I0122 21:26:08.740045  311280 node_ready.go:38] duration metric: took 39.435715ms for node "no-preload-806477" to be "Ready" ...
	I0122 21:26:08.740058  311280 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:26:08.758838  311280 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-n5dr4" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:08.786982  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:26:08.787015  311280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:26:08.814871  311280 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:26:08.814896  311280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:26:08.816207  311280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:26:08.821243  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:26:08.821286  311280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:26:08.879850  311280 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:26:08.879898  311280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:26:08.931453  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:26:08.931493  311280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:26:08.962422  311280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:26:09.013360  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:26:09.013400  311280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:26:09.023713  311280 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:26:09.023751  311280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:26:09.137556  311280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:26:09.170229  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:26:09.170274  311280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:26:09.281403  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:26:09.281438  311280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:26:09.365738  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:26:09.365771  311280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:26:09.511474  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:26:09.511514  311280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:26:09.736974  311280 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:26:09.737019  311280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:26:09.887602  311280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:26:10.393174  311280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.576913488s)
	I0122 21:26:10.393249  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:10.393251  311280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.430789335s)
	I0122 21:26:10.393304  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:10.393265  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:10.393320  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:10.393886  311280 main.go:141] libmachine: (no-preload-806477) DBG | Closing plugin on server side
	I0122 21:26:10.393941  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:10.393949  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:10.393958  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:10.393967  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:10.394125  311280 main.go:141] libmachine: (no-preload-806477) DBG | Closing plugin on server side
	I0122 21:26:10.394455  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:10.394472  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:10.394537  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:10.394556  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:10.394571  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:10.394580  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:10.394981  311280 main.go:141] libmachine: (no-preload-806477) DBG | Closing plugin on server side
	I0122 21:26:10.395041  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:10.395079  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:10.421134  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:10.421163  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:10.421505  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:10.421531  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:10.886382  311280 pod_ready.go:103] pod "coredns-668d6bf9bc-n5dr4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:11.252697  311280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.115084152s)
	I0122 21:26:11.252765  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:11.252784  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:11.253125  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:11.253145  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:11.253155  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:11.253163  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:11.253658  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:11.253679  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:11.253693  311280 addons.go:479] Verifying addon metrics-server=true in "no-preload-806477"
	I0122 21:26:12.289444  311280 pod_ready.go:93] pod "coredns-668d6bf9bc-n5dr4" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:12.289474  311280 pod_ready.go:82] duration metric: took 3.530600151s for pod "coredns-668d6bf9bc-n5dr4" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:12.289485  311280 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-t7m8w" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:12.395899  311280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.508239233s)
	I0122 21:26:12.395979  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:12.396002  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:12.396405  311280 main.go:141] libmachine: (no-preload-806477) DBG | Closing plugin on server side
	I0122 21:26:12.396453  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:12.396467  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:12.396480  311280 main.go:141] libmachine: Making call to close driver server
	I0122 21:26:12.396504  311280 main.go:141] libmachine: (no-preload-806477) Calling .Close
	I0122 21:26:12.396792  311280 main.go:141] libmachine: (no-preload-806477) DBG | Closing plugin on server side
	I0122 21:26:12.396836  311280 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:26:12.396853  311280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:26:12.398836  311280 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-806477 addons enable metrics-server
	
	I0122 21:26:12.400438  311280 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:26:12.401872  311280 addons.go:514] duration metric: took 4.00322976s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:26:13.797934  311280 pod_ready.go:93] pod "coredns-668d6bf9bc-t7m8w" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:13.797961  311280 pod_ready.go:82] duration metric: took 1.508468468s for pod "coredns-668d6bf9bc-t7m8w" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:13.797973  311280 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:13.804972  311280 pod_ready.go:93] pod "etcd-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:13.805015  311280 pod_ready.go:82] duration metric: took 7.032925ms for pod "etcd-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:13.805031  311280 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:13.812271  311280 pod_ready.go:93] pod "kube-apiserver-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:13.812304  311280 pod_ready.go:82] duration metric: took 7.262634ms for pod "kube-apiserver-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:13.812321  311280 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:15.320710  311280 pod_ready.go:93] pod "kube-controller-manager-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:15.320740  311280 pod_ready.go:82] duration metric: took 1.508411382s for pod "kube-controller-manager-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:15.320752  311280 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22v8c" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:15.328725  311280 pod_ready.go:93] pod "kube-proxy-22v8c" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:15.328758  311280 pod_ready.go:82] duration metric: took 7.996938ms for pod "kube-proxy-22v8c" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:15.328770  311280 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:15.464374  311280 pod_ready.go:93] pod "kube-scheduler-no-preload-806477" in "kube-system" namespace has status "Ready":"True"
	I0122 21:26:15.464415  311280 pod_ready.go:82] duration metric: took 135.635474ms for pod "kube-scheduler-no-preload-806477" in "kube-system" namespace to be "Ready" ...
	I0122 21:26:15.464428  311280 pod_ready.go:39] duration metric: took 6.724359277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:26:15.464453  311280 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:26:15.464526  311280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:15.494844  311280 api_server.go:72] duration metric: took 7.0964516s to wait for apiserver process to appear ...
	I0122 21:26:15.494879  311280 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:26:15.494905  311280 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0122 21:26:15.500545  311280 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0122 21:26:15.501927  311280 api_server.go:141] control plane version: v1.32.1
	I0122 21:26:15.501955  311280 api_server.go:131] duration metric: took 7.068634ms to wait for apiserver health ...
	I0122 21:26:15.501966  311280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:26:15.667291  311280 system_pods.go:59] 9 kube-system pods found
	I0122 21:26:15.667326  311280 system_pods.go:61] "coredns-668d6bf9bc-n5dr4" [a12f3179-6eca-4383-b99b-36acf5a5fc5d] Running
	I0122 21:26:15.667331  311280 system_pods.go:61] "coredns-668d6bf9bc-t7m8w" [6eab222c-ae91-4937-85a7-8ebe42d731a4] Running
	I0122 21:26:15.667335  311280 system_pods.go:61] "etcd-no-preload-806477" [bc2208c8-d23c-4550-bcac-f3d09ffe224f] Running
	I0122 21:26:15.667341  311280 system_pods.go:61] "kube-apiserver-no-preload-806477" [b87db01e-ed8e-415b-8974-faca6c0409cd] Running
	I0122 21:26:15.667345  311280 system_pods.go:61] "kube-controller-manager-no-preload-806477" [ec8ffaaf-87ae-44b7-aa26-483772669b53] Running
	I0122 21:26:15.667348  311280 system_pods.go:61] "kube-proxy-22v8c" [22d04c08-85f0-4b37-b855-5de9a1b827ed] Running
	I0122 21:26:15.667351  311280 system_pods.go:61] "kube-scheduler-no-preload-806477" [518f574e-8d93-46a3-a990-7450175b176b] Running
	I0122 21:26:15.667358  311280 system_pods.go:61] "metrics-server-f79f97bbb-wnc4r" [0c5809fa-0fa9-4635-bc21-3dc0e9ea6e74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:26:15.667362  311280 system_pods.go:61] "storage-provisioner" [0b817f35-8247-4f27-8456-829445acac1d] Running
	I0122 21:26:15.667369  311280 system_pods.go:74] duration metric: took 165.397187ms to wait for pod list to return data ...
	I0122 21:26:15.667378  311280 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:26:15.863546  311280 default_sa.go:45] found service account: "default"
	I0122 21:26:15.863592  311280 default_sa.go:55] duration metric: took 196.207618ms for default service account to be created ...
	I0122 21:26:15.863608  311280 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:26:16.066670  311280 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-806477 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806477 -n no-preload-806477
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-806477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-806477 logs -n 25: (1.685109666s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-806477                                   | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-635179                 | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-181389        | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991469       | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | default-k8s-diff-port-991469                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-181389             | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-635179 image list                          | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-489789             | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-489789                  | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-489789 image list                           | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:46 UTC | 22 Jan 25 21:46 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:27:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:27:23.911116  314650 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:27:23.911744  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.911765  314650 out.go:358] Setting ErrFile to fd 2...
	I0122 21:27:23.911774  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.912250  314650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:27:23.913222  314650 out.go:352] Setting JSON to false
	I0122 21:27:23.914762  314650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14990,"bootTime":1737566254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:27:23.914894  314650 start.go:139] virtualization: kvm guest
	I0122 21:27:23.916750  314650 out.go:177] * [newest-cni-489789] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:27:23.918320  314650 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:27:23.918320  314650 notify.go:220] Checking for updates...
	I0122 21:27:23.920824  314650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:27:23.922296  314650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:23.923574  314650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:27:23.924769  314650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:27:23.926102  314650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:27:23.927578  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:23.928058  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.928125  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.944579  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0122 21:27:23.945073  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.945640  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.945664  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.946073  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.946377  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:23.946689  314650 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:27:23.947048  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.947102  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.963420  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0122 21:27:23.963873  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.964454  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.964502  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.964926  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.965154  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.005605  314650 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:27:24.007129  314650 start.go:297] selected driver: kvm2
	I0122 21:27:24.007153  314650 start.go:901] validating driver "kvm2" against &{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.007318  314650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:27:24.008093  314650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.008222  314650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:27:24.024940  314650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:27:24.025456  314650 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:24.025502  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:24.025549  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:24.025588  314650 start.go:340] cluster config:
	{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.025695  314650 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.027752  314650 out.go:177] * Starting "newest-cni-489789" primary control-plane node in "newest-cni-489789" cluster
	I0122 21:27:24.029033  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:24.029101  314650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:27:24.029119  314650 cache.go:56] Caching tarball of preloaded images
	I0122 21:27:24.029287  314650 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:27:24.029306  314650 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:27:24.029475  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:24.029808  314650 start.go:360] acquireMachinesLock for newest-cni-489789: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:27:24.029874  314650 start.go:364] duration metric: took 34.85µs to acquireMachinesLock for "newest-cni-489789"
	I0122 21:27:24.029897  314650 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:27:24.029908  314650 fix.go:54] fixHost starting: 
	I0122 21:27:24.030383  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:24.030486  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:24.046512  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0122 21:27:24.047013  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:24.047605  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:24.047640  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:24.048047  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:24.048290  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.048464  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:24.050271  314650 fix.go:112] recreateIfNeeded on newest-cni-489789: state=Stopped err=<nil>
	I0122 21:27:24.050304  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	W0122 21:27:24.050473  314650 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:27:24.052496  314650 out.go:177] * Restarting existing kvm2 VM for "newest-cni-489789" ...
	I0122 21:27:21.730303  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:21.747123  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:21.747212  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:21.793769  312675 cri.go:89] found id: ""
	I0122 21:27:21.793807  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.793827  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:21.793835  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:21.793912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:21.840045  312675 cri.go:89] found id: ""
	I0122 21:27:21.840088  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.840101  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:21.840109  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:21.840187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:21.885265  312675 cri.go:89] found id: ""
	I0122 21:27:21.885302  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.885314  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:21.885323  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:21.885404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:21.937734  312675 cri.go:89] found id: ""
	I0122 21:27:21.937768  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.937777  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:21.937783  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:21.937844  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:21.989238  312675 cri.go:89] found id: ""
	I0122 21:27:21.989276  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.989294  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:21.989300  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:21.989377  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:22.035837  312675 cri.go:89] found id: ""
	I0122 21:27:22.035921  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.035934  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:22.035944  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:22.036016  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:22.091690  312675 cri.go:89] found id: ""
	I0122 21:27:22.091731  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.091745  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:22.091754  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:22.091828  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:22.149775  312675 cri.go:89] found id: ""
	I0122 21:27:22.149888  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.149913  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:22.149958  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:22.150005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:22.213610  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:22.213665  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:22.233970  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:22.234014  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:22.318579  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:22.318606  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:22.318622  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:22.422850  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:22.422899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:24.974063  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:24.990751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:24.990850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:25.036044  312675 cri.go:89] found id: ""
	I0122 21:27:25.036082  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.036094  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:25.036103  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:25.036173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:25.078700  312675 cri.go:89] found id: ""
	I0122 21:27:25.078736  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.078748  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:25.078759  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:25.078829  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:25.134919  312675 cri.go:89] found id: ""
	I0122 21:27:25.134971  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.134984  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:25.134994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:25.135075  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:25.183649  312675 cri.go:89] found id: ""
	I0122 21:27:25.183684  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.183695  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:25.183704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:25.183778  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:25.240357  312675 cri.go:89] found id: ""
	I0122 21:27:25.240401  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.240414  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:25.240425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:25.240555  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:25.284093  312675 cri.go:89] found id: ""
	I0122 21:27:25.284132  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.284141  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:25.284149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:25.284218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:25.328590  312675 cri.go:89] found id: ""
	I0122 21:27:25.328621  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.328632  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:25.328641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:25.328710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:25.378479  312675 cri.go:89] found id: ""
	I0122 21:27:25.378517  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.378529  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:25.378543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:25.378559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:25.433767  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:25.433800  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
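
The container-status sweep above runs `sudo crictl ps -a --quiet --name=<component>` once per control-plane component and treats an empty ID list as "no container was found". A minimal, hypothetical Go sketch of that pattern follows (the helper name is illustrative, not minikube's actual cri.go API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to crictl the same way the log above does and
// returns the (possibly empty) list of container IDs for one component name.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```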
	I0122 21:27:24.053834  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Start
	I0122 21:27:24.054152  314650 main.go:141] libmachine: (newest-cni-489789) starting domain...
	I0122 21:27:24.054175  314650 main.go:141] libmachine: (newest-cni-489789) ensuring networks are active...
	I0122 21:27:24.055132  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network default is active
	I0122 21:27:24.055534  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network mk-newest-cni-489789 is active
	I0122 21:27:24.055963  314650 main.go:141] libmachine: (newest-cni-489789) getting domain XML...
	I0122 21:27:24.056886  314650 main.go:141] libmachine: (newest-cni-489789) creating domain...
	I0122 21:27:25.457503  314650 main.go:141] libmachine: (newest-cni-489789) waiting for IP...
	I0122 21:27:25.458754  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.459431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.459544  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.459394  314684 retry.go:31] will retry after 258.579884ms: waiting for domain to come up
	I0122 21:27:25.720098  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.720657  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.720704  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.720649  314684 retry.go:31] will retry after 347.192205ms: waiting for domain to come up
	I0122 21:27:26.069095  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.069843  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.069880  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.069813  314684 retry.go:31] will retry after 318.422908ms: waiting for domain to come up
	I0122 21:27:26.390692  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.391374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.391431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.391350  314684 retry.go:31] will retry after 516.847382ms: waiting for domain to come up
	I0122 21:27:26.910252  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.910831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.910862  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.910801  314684 retry.go:31] will retry after 657.195872ms: waiting for domain to come up
	I0122 21:27:27.569972  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:27.570617  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:27.570651  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:27.570590  314684 retry.go:31] will retry after 601.660948ms: waiting for domain to come up
	I0122 21:27:28.173427  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:28.174022  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:28.174065  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:28.173988  314684 retry.go:31] will retry after 839.292486ms: waiting for domain to come up
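
The 314650 process above polls libvirt for the domain's DHCP lease, backing off with growing, jittered delays until an IP appears. A rough Go sketch of that wait-for-IP loop, assuming a hypothetical `domainHasIP` helper in place of the real libvirt lease lookup:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// domainHasIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the domain's MAC address; it returns the IP once one exists.
func domainHasIP(domain string) (string, bool) { return "", false }

// waitForIP retries with a growing, jittered delay, mirroring the
// "will retry after ...: waiting for domain to come up" lines in the log.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := domainHasIP(domain); ok {
			return ip, nil
		}
		// Sleep the base delay plus random jitter, then grow the base delay.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	if ip, err := waitForIP("newest-cni-489789", 3*time.Second); err == nil {
		fmt.Println("domain IP:", ip)
	} else {
		fmt.Println(err)
	}
}
```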
	I0122 21:27:25.497717  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:25.497767  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:25.530904  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:25.530961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:25.631676  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:25.631701  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:25.631717  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:28.221852  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:28.236702  312675 kubeadm.go:597] duration metric: took 4m3.036103838s to restartPrimaryControlPlane
	W0122 21:27:28.236803  312675 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:27:28.236837  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:27:29.014929  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:29.015535  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:29.015569  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:29.015501  314684 retry.go:31] will retry after 1.28366543s: waiting for domain to come up
	I0122 21:27:30.300346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:30.300806  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:30.300834  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:30.300775  314684 retry.go:31] will retry after 1.437378164s: waiting for domain to come up
	I0122 21:27:31.739437  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:31.740073  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:31.740106  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:31.740043  314684 retry.go:31] will retry after 1.547235719s: waiting for domain to come up
	I0122 21:27:33.289857  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:33.290395  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:33.290452  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:33.290357  314684 retry.go:31] will retry after 2.864838858s: waiting for domain to come up
	I0122 21:27:30.647940  312675 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.411072952s)
	I0122 21:27:30.648042  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:27:30.669610  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:30.684678  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:30.698168  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:30.698232  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:30.698285  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:30.708774  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:30.708855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:30.720213  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:30.731121  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:30.731207  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:30.743153  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.754160  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:30.754262  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.765730  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:30.776902  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:30.776990  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
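
The stale-config cleanup above is mechanical: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and, when the file is missing or does not contain it, removes the file so kubeadm can regenerate it. A hedged Go sketch of the same check-and-remove loop (local file access instead of the SSH runner the log uses):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm recreates it.
			fmt.Printf("%q does not contain %q - removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}
```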
	I0122 21:27:30.788361  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:27:31.040925  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:27:36.157916  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:36.158675  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:36.158706  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:36.158608  314684 retry.go:31] will retry after 3.253566336s: waiting for domain to come up
	I0122 21:27:39.413761  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:39.414347  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:39.414380  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:39.414310  314684 retry.go:31] will retry after 3.952766125s: waiting for domain to come up
	I0122 21:27:43.371406  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371943  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has current primary IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371999  314650 main.go:141] libmachine: (newest-cni-489789) found domain IP: 192.168.50.146
	I0122 21:27:43.372024  314650 main.go:141] libmachine: (newest-cni-489789) reserving static IP address...
	I0122 21:27:43.372454  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.372482  314650 main.go:141] libmachine: (newest-cni-489789) DBG | skip adding static IP to network mk-newest-cni-489789 - found existing host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"}
	I0122 21:27:43.372502  314650 main.go:141] libmachine: (newest-cni-489789) reserved static IP address 192.168.50.146 for domain newest-cni-489789
	I0122 21:27:43.372516  314650 main.go:141] libmachine: (newest-cni-489789) waiting for SSH...
	I0122 21:27:43.372527  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Getting to WaitForSSH function...
	I0122 21:27:43.374698  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.374984  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.375016  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.375148  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH client type: external
	I0122 21:27:43.375173  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa (-rw-------)
	I0122 21:27:43.375212  314650 main.go:141] libmachine: (newest-cni-489789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:27:43.375232  314650 main.go:141] libmachine: (newest-cni-489789) DBG | About to run SSH command:
	I0122 21:27:43.375243  314650 main.go:141] libmachine: (newest-cni-489789) DBG | exit 0
	I0122 21:27:43.503039  314650 main.go:141] libmachine: (newest-cni-489789) DBG | SSH cmd err, output: <nil>: 
	I0122 21:27:43.503449  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetConfigRaw
	I0122 21:27:43.504138  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.507198  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507562  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.507607  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507876  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:43.508166  314650 machine.go:93] provisionDockerMachine start ...
	I0122 21:27:43.508196  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:43.508518  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.511111  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511408  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.511442  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511632  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.511842  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512002  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512147  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.512352  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.512624  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.512643  314650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:27:43.619425  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:27:43.619472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619742  314650 buildroot.go:166] provisioning hostname "newest-cni-489789"
	I0122 21:27:43.619772  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619998  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.622781  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623242  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.623285  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623505  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.623728  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.623892  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.624013  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.624154  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.624410  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.624432  314650 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-489789 && echo "newest-cni-489789" | sudo tee /etc/hostname
	I0122 21:27:43.747575  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-489789
	
	I0122 21:27:43.747605  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.750745  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751080  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.751127  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751553  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.751775  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.751918  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.752035  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.752185  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.752425  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.752465  314650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-489789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-489789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-489789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:27:43.865258  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:27:43.865290  314650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:27:43.865312  314650 buildroot.go:174] setting up certificates
	I0122 21:27:43.865327  314650 provision.go:84] configureAuth start
	I0122 21:27:43.865362  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.865704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.868648  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.868993  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.869025  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.869222  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.871572  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.871860  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.871894  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.872044  314650 provision.go:143] copyHostCerts
	I0122 21:27:43.872109  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:27:43.872130  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:27:43.872205  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:27:43.872312  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:27:43.872321  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:27:43.872346  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:27:43.872433  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:27:43.872447  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:27:43.872471  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:27:43.872536  314650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-489789 san=[127.0.0.1 192.168.50.146 localhost minikube newest-cni-489789]
	I0122 21:27:44.234481  314650 provision.go:177] copyRemoteCerts
	I0122 21:27:44.234579  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:27:44.234618  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.237848  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238297  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.238332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238604  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.238788  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.238988  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.239154  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.326083  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:27:44.355837  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:27:44.387644  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:27:44.418003  314650 provision.go:87] duration metric: took 552.65522ms to configureAuth
	I0122 21:27:44.418039  314650 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:27:44.418347  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:44.418475  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.421349  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.421796  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.421839  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.422067  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.422301  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422470  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422603  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.422810  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.423129  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.423156  314650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:27:44.671197  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:27:44.671232  314650 machine.go:96] duration metric: took 1.163046458s to provisionDockerMachine
	I0122 21:27:44.671247  314650 start.go:293] postStartSetup for "newest-cni-489789" (driver="kvm2")
	I0122 21:27:44.671261  314650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:27:44.671289  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.671667  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:27:44.671704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.674811  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675137  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.675164  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675350  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.675624  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.675817  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.675987  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.759194  314650 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:27:44.764553  314650 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:27:44.764591  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:27:44.764668  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:27:44.764741  314650 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:27:44.764835  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:27:44.778239  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:44.807409  314650 start.go:296] duration metric: took 136.131239ms for postStartSetup
	I0122 21:27:44.807474  314650 fix.go:56] duration metric: took 20.777566838s for fixHost
	I0122 21:27:44.807580  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.810883  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811279  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.811312  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.811736  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.811908  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.812086  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.812268  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.812448  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.812459  314650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:27:44.915903  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737581264.870208902
	
	I0122 21:27:44.915934  314650 fix.go:216] guest clock: 1737581264.870208902
	I0122 21:27:44.915945  314650 fix.go:229] Guest: 2025-01-22 21:27:44.870208902 +0000 UTC Remote: 2025-01-22 21:27:44.807479632 +0000 UTC m=+20.941890306 (delta=62.72927ms)
	I0122 21:27:44.915983  314650 fix.go:200] guest clock delta is within tolerance: 62.72927ms
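
The clock-skew check above is just guest time minus host time: 1737581264.870208902 − 1737581264.807479632 ≈ 0.06272927 s ≈ 62.73 ms, which matches the reported delta=62.72927ms and falls inside the skew tolerance, so no guest clock adjustment is needed.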
	I0122 21:27:44.915991  314650 start.go:83] releasing machines lock for "newest-cni-489789", held for 20.886101347s
	I0122 21:27:44.916019  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.916292  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:44.919374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.919795  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.919831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.920026  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920725  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920966  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.921087  314650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:27:44.921144  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.921271  314650 ssh_runner.go:195] Run: cat /version.json
	I0122 21:27:44.921303  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.924275  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924511  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924546  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924566  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924759  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.924827  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924871  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924995  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925090  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.925199  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925283  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925319  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.925420  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925532  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:45.025072  314650 ssh_runner.go:195] Run: systemctl --version
	I0122 21:27:45.032652  314650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:27:45.187726  314650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:27:45.194767  314650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:27:45.194851  314650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:27:45.213610  314650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:27:45.213644  314650 start.go:495] detecting cgroup driver to use...
	I0122 21:27:45.213723  314650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:27:45.231803  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:27:45.247682  314650 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:27:45.247801  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:27:45.263581  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:27:45.279536  314650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:27:45.406663  314650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:27:45.562297  314650 docker.go:233] disabling docker service ...
	I0122 21:27:45.562383  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:27:45.579904  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:27:45.595144  314650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:27:45.739957  314650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:27:45.866024  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:27:45.882728  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:27:45.907297  314650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:27:45.907388  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.920271  314650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:27:45.920341  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.933095  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.945711  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.958348  314650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:27:45.972409  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.989090  314650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.011819  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.025229  314650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:27:46.038393  314650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:27:46.038475  314650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:27:46.055252  314650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:27:46.068173  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:46.196285  314650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:27:46.295821  314650 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:27:46.295921  314650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
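
"Will wait 60s for socket path" above amounts to polling: stat the CRI-O socket until it exists or the deadline passes. A minimal Go sketch of that wait, not minikube's actual code:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for a path (here the CRI-O socket) until it appears
// or the timeout elapses, mirroring the 60s socket wait in the log.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```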
	I0122 21:27:46.301506  314650 start.go:563] Will wait 60s for crictl version
	I0122 21:27:46.301587  314650 ssh_runner.go:195] Run: which crictl
	I0122 21:27:46.306074  314650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:27:46.352624  314650 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:27:46.352727  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.385398  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.422040  314650 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:27:46.423591  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:46.426902  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427305  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:46.427332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427679  314650 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0122 21:27:46.432609  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:46.448941  314650 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0122 21:27:46.450413  314650 kubeadm.go:883] updating cluster {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0122 21:27:46.450575  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:46.450683  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:46.496073  314650 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:27:46.496165  314650 ssh_runner.go:195] Run: which lz4
	I0122 21:27:46.500895  314650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:27:46.505854  314650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:27:46.505909  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:27:48.159588  314650 crio.go:462] duration metric: took 1.658732075s to copy over tarball
	I0122 21:27:48.159687  314650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:27:50.643587  314650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483861806s)
	I0122 21:27:50.643623  314650 crio.go:469] duration metric: took 2.483996867s to extract the tarball
	I0122 21:27:50.643632  314650 ssh_runner.go:146] rm: /preloaded.tar.lz4
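
As a rough sanity check on the preload numbers above: the 398,670,900-byte tarball was copied in about 1.66 s (roughly 240 MB/s) and extracted in about 2.48 s, so the whole preload step completes in well under ten seconds.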
	I0122 21:27:50.683708  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:50.732147  314650 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:27:50.732183  314650 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:27:50.732194  314650 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.32.1 crio true true} ...
	I0122 21:27:50.732350  314650 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-489789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:27:50.732425  314650 ssh_runner.go:195] Run: crio config
	I0122 21:27:50.789877  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:50.789904  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:50.789920  314650 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0122 21:27:50.789953  314650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-489789 NodeName:newest-cni-489789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:27:50.790132  314650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-489789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.146"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:27:50.790261  314650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:27:50.801652  314650 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:27:50.801742  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:27:50.813168  314650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0122 21:27:50.832707  314650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:27:50.852375  314650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0122 21:27:50.875185  314650 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I0122 21:27:50.879818  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:50.893992  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:51.040056  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:51.060681  314650 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789 for IP: 192.168.50.146
	I0122 21:27:51.060711  314650 certs.go:194] generating shared ca certs ...
	I0122 21:27:51.060737  314650 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.060940  314650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:27:51.061018  314650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:27:51.061036  314650 certs.go:256] generating profile certs ...
	I0122 21:27:51.061157  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/client.key
	I0122 21:27:51.061251  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key.de28c3d3
	I0122 21:27:51.061317  314650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key
	I0122 21:27:51.061482  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:27:51.061526  314650 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:27:51.061539  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:27:51.061572  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:27:51.061603  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:27:51.061636  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:27:51.061692  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:51.062633  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:27:51.098858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:27:51.145243  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:27:51.180019  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:27:51.208916  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:27:51.237139  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:27:51.270858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:27:51.306734  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:27:51.341424  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:27:51.370701  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:27:51.402552  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:27:51.431817  314650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:27:51.452816  314650 ssh_runner.go:195] Run: openssl version
	I0122 21:27:51.460223  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:27:51.474716  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480785  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480874  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.489093  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:27:51.501870  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:27:51.514659  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520559  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520713  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.527928  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:27:51.541856  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:27:51.555463  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561295  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561368  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.568531  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
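
The openssl/ln pairs above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, the hashed-symlink layout OpenSSL uses to look up trusted certs. The Go sketch below is illustrative only (it assumes openssl is on PATH and uses example paths from this log); it is not minikube's certs.go implementation.

// Illustrative sketch: create the /etc/ssl/certs/<hash>.0 symlink for a CA cert,
// mirroring `openssl x509 -hash -noout` followed by `test -L ... || ln -fs ...`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(pemPath, certsDir string) error {
	// Equivalent to: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent to: test -L <link> || ln -fs <pemPath> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink (or file) already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
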
	I0122 21:27:51.584716  314650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:27:51.590762  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:27:51.598592  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:27:51.605666  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:27:51.613414  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:27:51.621894  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:27:51.629916  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0122 21:27:51.636995  314650 kubeadm.go:392] StartCluster: {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:51.637138  314650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:27:51.637358  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.691610  314650 cri.go:89] found id: ""
	I0122 21:27:51.691683  314650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:27:51.703943  314650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:27:51.703976  314650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:27:51.704044  314650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:27:51.715920  314650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:27:51.716767  314650 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-489789" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:51.717203  314650 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-489789" cluster setting kubeconfig missing "newest-cni-489789" context setting]
	I0122 21:27:51.717901  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
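
The kubeconfig repair logged above (adding the missing "newest-cni-489789" cluster and context entries) can be pictured with client-go's clientcmd package. This is a hedged sketch under assumed paths and an assumed auth-info name, not minikube's kubeconfig.go:

// Illustrative sketch: add a missing cluster and context entry to an existing
// kubeconfig file. The certificate-authority path and auth-info name are assumptions.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server
		cluster.CertificateAuthority = "/home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt" // assumed CA path
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // assumes a matching user entry already exists
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig(
		"/home/jenkins/minikube-integration/20288-247142/kubeconfig",
		"newest-cni-489789",
		"https://192.168.50.146:8443",
	); err != nil {
		fmt.Println(err)
	}
}
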
	I0122 21:27:51.729230  314650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:27:51.741794  314650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I0122 21:27:51.741842  314650 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:27:51.741859  314650 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:27:51.741927  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.789068  314650 cri.go:89] found id: ""
	I0122 21:27:51.789171  314650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:27:51.809451  314650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:51.821492  314650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:51.821515  314650 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:51.821564  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:51.833428  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:51.833507  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:51.845423  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:51.856151  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:51.856247  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:51.868260  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.879595  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:51.879671  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.892482  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:51.905485  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:51.905558  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:51.917498  314650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:51.930487  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:52.072199  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.069420  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.321398  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.393577  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.471920  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:53.472027  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:53.972577  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.472481  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.972531  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.989674  314650 api_server.go:72] duration metric: took 1.517756303s to wait for apiserver process to appear ...
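
The "waiting for apiserver process to appear" lines are a simple poll: re-run pgrep roughly every 500ms until it exits 0. A rough Go equivalent is sketched below; it is illustrative only, not minikube's api_server.go.

// Illustrative sketch: poll until a kube-apiserver process matching the minikube
// profile shows up, mirroring the repeated `sudo pgrep -xnf ...` runs above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
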
	I0122 21:27:54.989707  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:54.989729  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.208473  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.208515  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.208536  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.292726  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.292780  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.490170  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.499620  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.499655  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:57.990312  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.998214  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.998257  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.489875  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.496876  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:58.496913  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.990600  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.995909  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.004894  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.004943  314650 api_server.go:131] duration metric: took 4.015227175s to wait for apiserver health ...
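
The healthz sequence above is the usual bootstrap progression: 403 while the RBAC rules that allow anonymous /healthz access have not yet been created, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200. The loop below is a rough sketch of that kind of polling against a self-signed apiserver; names, timeouts, and the insecure TLS client are assumptions, not minikube's api_server.go.

// Illustrative sketch: poll an apiserver /healthz endpoint until it returns 200 OK,
// retrying on 403/500 as the log above shows.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert during bootstrap, so the check
		// skips verification here (assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "/healthz returned 200: ok"
			}
			// 403 (anonymous user) and 500 (post-start hooks still failing) are retried.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.146:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
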
	I0122 21:27:59.004977  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:59.004987  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:59.006689  314650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:27:59.008029  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:27:59.020070  314650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
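
The bridge CNI step copies a generated conflist to /etc/cni/net.d/1-k8s.conflist. The actual 496-byte file is not reproduced in the log; the snippet below only sketches a plausible bridge + host-local config matching this profile's 10.42.0.0/16 pod CIDR and writes it the way the log describes. Every field in the embedded JSON is an assumption.

// Illustrative sketch: write a bridge CNI conflist of the general shape minikube
// installs for the kvm2 + crio combination. Contents are assumed, not copied from
// the real file.
package main

import (
	"fmt"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
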
	I0122 21:27:59.044659  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.055648  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.055702  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.055713  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.055724  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.055732  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.055738  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.055754  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.055766  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.055774  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.055788  314650 system_pods.go:74] duration metric: took 11.091605ms to wait for pod list to return data ...
	I0122 21:27:59.055802  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.060105  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.060148  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.060164  314650 node_conditions.go:105] duration metric: took 4.355866ms to run NodePressure ...
	I0122 21:27:59.060188  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:59.384018  314650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:27:59.398090  314650 ops.go:34] apiserver oom_adj: -16
	I0122 21:27:59.398128  314650 kubeadm.go:597] duration metric: took 7.694142189s to restartPrimaryControlPlane
	I0122 21:27:59.398142  314650 kubeadm.go:394] duration metric: took 7.761160632s to StartCluster
	I0122 21:27:59.398170  314650 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.398290  314650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:59.400046  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.400419  314650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:27:59.400556  314650 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:27:59.400665  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:59.400686  314650 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-489789"
	I0122 21:27:59.400707  314650 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-489789"
	W0122 21:27:59.400716  314650 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:27:59.400726  314650 addons.go:69] Setting default-storageclass=true in profile "newest-cni-489789"
	I0122 21:27:59.400741  314650 addons.go:69] Setting dashboard=true in profile "newest-cni-489789"
	I0122 21:27:59.400761  314650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-489789"
	I0122 21:27:59.400768  314650 addons.go:238] Setting addon dashboard=true in "newest-cni-489789"
	W0122 21:27:59.400778  314650 addons.go:247] addon dashboard should already be in state true
	I0122 21:27:59.400815  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.400765  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401235  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401237  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401262  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401321  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.400718  314650 addons.go:69] Setting metrics-server=true in profile "newest-cni-489789"
	I0122 21:27:59.401464  314650 addons.go:238] Setting addon metrics-server=true in "newest-cni-489789"
	W0122 21:27:59.401475  314650 addons.go:247] addon metrics-server should already be in state true
	I0122 21:27:59.401509  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401887  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401975  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.402025  314650 out.go:177] * Verifying Kubernetes components...
	I0122 21:27:59.403359  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0122 21:27:59.421021  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0122 21:27:59.421349  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421459  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421547  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.422098  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422122  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422144  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422325  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422349  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422401  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0122 21:27:59.423146  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423151  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423148  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423359  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.423430  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.423817  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.423841  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.423816  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.423882  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.424405  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.425054  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425105  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.425288  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425335  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.427261  314650 addons.go:238] Setting addon default-storageclass=true in "newest-cni-489789"
	W0122 21:27:59.427282  314650 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:27:59.427316  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.427674  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.427723  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.446713  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0122 21:27:59.446783  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0122 21:27:59.451272  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451373  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451946  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.451969  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452101  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.452121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452538  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.452791  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.452801  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.453414  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.455400  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.455881  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.457716  314650 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:27:59.457751  314650 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:27:59.459475  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:27:59.459504  314650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:27:59.459539  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.460864  314650 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:27:59.462275  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:27:59.462311  314650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:27:59.462354  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.466673  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467509  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.467541  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467851  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.468096  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.468288  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.468589  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.468600  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.469258  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.469308  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.469497  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.469679  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.469875  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.470056  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.473781  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0122 21:27:59.473966  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0122 21:27:59.474357  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474615  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474910  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.474936  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475242  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.475262  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475362  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.475908  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.475957  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.476056  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.476285  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.478535  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.480540  314650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:27:59.481982  314650 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.482013  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:27:59.482045  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.485683  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486142  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.486177  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486465  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.486710  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.486889  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.487038  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.494246  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0122 21:27:59.494801  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.495426  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.495453  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.495905  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.496130  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.498296  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.498565  314650 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.498586  314650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:27:59.498611  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.501861  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502313  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.502346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502646  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.502865  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.503077  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.503233  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.724824  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:59.770671  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:59.770782  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:59.794707  314650 api_server.go:72] duration metric: took 394.235725ms to wait for apiserver process to appear ...
	I0122 21:27:59.794739  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:59.794764  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:59.830916  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.833823  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.833866  314650 api_server.go:131] duration metric: took 39.117571ms to wait for apiserver health ...
	I0122 21:27:59.833879  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.842548  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.866014  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.866078  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.866091  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.866103  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.866113  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.866119  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.866128  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.866137  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.866143  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.866152  314650 system_pods.go:74] duration metric: took 32.265403ms to wait for pod list to return data ...
	I0122 21:27:59.866168  314650 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:27:59.871064  314650 default_sa.go:45] found service account: "default"
	I0122 21:27:59.871106  314650 default_sa.go:55] duration metric: took 4.928382ms for default service account to be created ...
	I0122 21:27:59.871125  314650 kubeadm.go:582] duration metric: took 470.664674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:59.871157  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.875089  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.875125  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.875139  314650 node_conditions.go:105] duration metric: took 3.96814ms to run NodePressure ...
	I0122 21:27:59.875155  314650 start.go:241] waiting for startup goroutines ...
	I0122 21:27:59.879100  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.991147  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:27:59.991183  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:28:00.010416  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:28:00.010448  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:28:00.034463  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:28:00.034502  314650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:28:00.066923  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.066963  314650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:28:00.112671  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.155556  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:28:00.155594  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:28:00.224676  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:28:00.224717  314650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:28:00.402769  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:28:00.402799  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:28:00.611017  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:28:00.611060  314650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:28:00.746957  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:28:00.747012  314650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:28:00.817833  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:28:00.817864  314650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:28:00.905629  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:28:00.905658  314650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:28:00.973450  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:00.973488  314650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:28:01.033649  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:01.902642  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.023480792s)
	I0122 21:28:01.902735  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902750  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.902850  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.060261694s)
	I0122 21:28:01.902903  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902915  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.904921  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.904989  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.904996  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905018  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905027  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905036  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905033  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905093  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905102  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905104  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905492  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905513  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905534  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905540  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905567  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905581  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.914609  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.914638  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.914975  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.915021  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.915036  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003384  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.890658634s)
	I0122 21:28:02.003466  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003495  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.003851  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.003914  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.003943  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003952  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003960  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.004229  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.004247  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.004261  314650 addons.go:479] Verifying addon metrics-server=true in "newest-cni-489789"
	I0122 21:28:02.891241  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.857486932s)
	I0122 21:28:02.891533  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.891588  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894087  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.894100  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894130  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.894140  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.894149  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894509  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894564  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.896533  314650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-489789 addons enable metrics-server
	
	I0122 21:28:02.898219  314650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:28:02.900518  314650 addons.go:514] duration metric: took 3.499959979s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:28:02.900586  314650 start.go:246] waiting for cluster config update ...
	I0122 21:28:02.900604  314650 start.go:255] writing updated cluster config ...
	I0122 21:28:02.900904  314650 ssh_runner.go:195] Run: rm -f paused
	I0122 21:28:02.965147  314650 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:28:02.967085  314650 out.go:177] * Done! kubectl is now configured to use "newest-cni-489789" cluster and "default" namespace by default
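The addon flow above copies each manifest to /etc/kubernetes/addons and then applies it with the pinned kubectl binary against /var/lib/minikube/kubeconfig. A small sketch of that apply step is shown here; the binary and file paths are taken from the log, while the wrapper function itself is illustrative rather than minikube's addons code.

// Illustrative sketch of the apply step seen above: run the pinned kubectl
// against the cluster kubeconfig for a list of addon manifests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.32.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/storageclass.yaml"},
	)
	if err != nil {
		fmt.Println(err)
	}
}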
	I0122 21:29:27.087272  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:29:27.087393  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:29:27.089567  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.089666  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.089781  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.089958  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.090084  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:27.090165  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:27.092167  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:27.092283  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:27.092358  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:27.092462  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:27.092535  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:27.092611  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:27.092682  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:27.092771  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:27.092848  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:27.092976  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:27.093094  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:27.093166  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:27.093261  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:27.093350  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:27.093398  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:27.093476  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:27.093559  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:27.093650  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:27.093720  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:27.093761  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:27.093818  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:27.095338  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:27.095468  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:27.095555  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:27.095632  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:27.095710  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:27.095838  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:29:27.095878  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:29:27.095937  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096106  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096195  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096453  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096565  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096796  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096867  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097090  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097177  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097367  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097386  312675 kubeadm.go:310] 
	I0122 21:29:27.097443  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:29:27.097512  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:29:27.097527  312675 kubeadm.go:310] 
	I0122 21:29:27.097557  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:29:27.097611  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:29:27.097761  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:29:27.097783  312675 kubeadm.go:310] 
	I0122 21:29:27.097878  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:29:27.097928  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:29:27.097955  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:29:27.097962  312675 kubeadm.go:310] 
	I0122 21:29:27.098055  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:29:27.098120  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:29:27.098127  312675 kubeadm.go:310] 
	I0122 21:29:27.098272  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:29:27.098357  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:29:27.098434  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:29:27.098533  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:29:27.098585  312675 kubeadm.go:310] 
	W0122 21:29:27.098687  312675 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0122 21:29:27.098731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:29:27.599261  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:29:27.617267  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:29:27.629164  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:29:27.629190  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:29:27.629255  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:29:27.641001  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:29:27.641072  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:29:27.653446  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:29:27.666334  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:29:27.666426  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:29:27.678551  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.689687  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:29:27.689757  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.702030  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:29:27.713507  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:29:27.713585  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
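Before retrying kubeadm init, the lines above check each kubeconfig for the expected control-plane endpoint and delete the file when the check fails, so kubeadm can regenerate it. The following is only a sketch of that cleanup logic under the same file paths and endpoint seen in the log; it is not minikube's kubeadm.go code.

// Illustrative sketch of the stale-config cleanup above: keep a kubeconfig
// only if it already points at the expected control-plane endpoint,
// otherwise remove it so kubeadm can regenerate it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it (errors ignored, as in the log).
			os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}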
	I0122 21:29:27.726067  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:29:27.816417  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.816555  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.995432  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.995599  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.995745  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:28.218104  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:28.220056  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:28.220190  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:28.220278  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:28.220383  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:28.220486  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:28.220573  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:28.220648  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:28.220880  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:28.221175  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:28.222058  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:28.222351  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:28.222442  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:28.222530  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:28.304455  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:28.572192  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:28.869356  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:29.053609  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:29.082264  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:29.082429  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:29.082503  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:29.253931  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:29.256894  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:29.257044  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:29.267513  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:29.269154  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:29.270276  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:29.274228  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:30:09.277116  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:30:09.277238  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:09.277504  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:14.278173  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:14.278454  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:24.278945  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:24.279149  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:44.279492  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:44.279715  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278351  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:31:24.278612  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278628  312675 kubeadm.go:310] 
	I0122 21:31:24.278664  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:31:24.278723  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:31:24.278735  312675 kubeadm.go:310] 
	I0122 21:31:24.278775  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:31:24.278827  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:31:24.278956  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:31:24.278981  312675 kubeadm.go:310] 
	I0122 21:31:24.279066  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:31:24.279109  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:31:24.279140  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:31:24.279147  312675 kubeadm.go:310] 
	I0122 21:31:24.279253  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:31:24.279353  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:31:24.279373  312675 kubeadm.go:310] 
	I0122 21:31:24.279516  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:31:24.279639  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:31:24.279754  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:31:24.279837  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:31:24.279895  312675 kubeadm.go:310] 
	I0122 21:31:24.280842  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:31:24.280984  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:31:24.281074  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:31:24.281148  312675 kubeadm.go:394] duration metric: took 7m59.138107768s to StartCluster
	I0122 21:31:24.281220  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:31:24.281302  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:31:24.331184  312675 cri.go:89] found id: ""
	I0122 21:31:24.331225  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.331235  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:31:24.331242  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:31:24.331309  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:31:24.372934  312675 cri.go:89] found id: ""
	I0122 21:31:24.372963  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.372972  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:31:24.372979  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:31:24.373034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:31:24.413239  312675 cri.go:89] found id: ""
	I0122 21:31:24.413274  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.413284  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:31:24.413290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:31:24.413347  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:31:24.452513  312675 cri.go:89] found id: ""
	I0122 21:31:24.452552  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.452564  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:31:24.452573  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:31:24.452644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:31:24.491580  312675 cri.go:89] found id: ""
	I0122 21:31:24.491617  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.491629  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:31:24.491637  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:31:24.491710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:31:24.544823  312675 cri.go:89] found id: ""
	I0122 21:31:24.544856  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.544865  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:31:24.544872  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:31:24.544930  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:31:24.585047  312675 cri.go:89] found id: ""
	I0122 21:31:24.585085  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.585099  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:31:24.585108  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:31:24.585175  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:31:24.624152  312675 cri.go:89] found id: ""
	I0122 21:31:24.624189  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.624201  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:31:24.624216  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:31:24.624231  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:31:24.717945  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:31:24.717971  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:31:24.717989  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:31:24.826216  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:31:24.826260  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:31:24.878403  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:31:24.878439  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:31:24.931058  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:31:24.931102  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
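The post-mortem section above asks crictl for containers named after each control-plane component and finds none, which is why only kubelet, dmesg, and CRI-O logs can be gathered. A minimal sketch of that container check is below; it assumes crictl is on PATH and is illustrative rather than minikube's cri.go implementation.

// Illustrative sketch of the container check above: ask crictl for any
// container whose name matches a control-plane component. An empty result,
// as in this log, means the component never started.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, len(ids))
	}
}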
	W0122 21:31:24.947080  312675 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:31:24.947171  312675 out.go:270] * 
	W0122 21:31:24.947310  312675 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.947331  312675 out.go:270] * 
	W0122 21:31:24.948119  312675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:31:24.951080  312675 out.go:201] 
	W0122 21:31:24.952375  312675 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.952433  312675 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:31:24.952459  312675 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:31:24.954056  312675 out.go:201] 
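	The suggestion above amounts to re-creating the profile with the kubelet's cgroup driver forced to systemd. A minimal sketch of how that could look for this run (the profile name, KVM driver, and Kubernetes version are taken from the logs above; any other flags are assumptions and may need adjusting for a given environment):
	
		minikube delete -p no-preload-806477
		minikube start -p no-preload-806477 \
		  --driver=kvm2 \
		  --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd
	
	If the control plane still fails to come up, 'journalctl -xeu kubelet' on the node and the crictl commands quoted in the kubeadm output above are the next places to look.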
	
	
	==> CRI-O <==
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.615796104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582441615767916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e28cffe2-7f3c-4b0f-ab18-0f3f98e2e185 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.616410773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21318ae1-a8af-49bb-b075-6ac9dafca053 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.616468987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21318ae1-a8af-49bb-b075-6ac9dafca053 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.619872254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3,PodSandboxId:58cc0e4851b941df6cbbe4144838057e283fca14803efcb9f2bfdd5239bb7f55,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582138753791589,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sq7fp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a3b2038-23f8-46b3-9e9f-fa0ccca814f1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de73b2a9abdc5bd9cffbee6c0e343ebc031df02be0420a55fc4e10201d77cffb,PodSandboxId:dfeb75b0d2b8e1f1ed462187804411f24bcdbaa9fcc6c3202302dec3af529947,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581180122702852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-59wcn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 8127cc66-08c1-429d-84e5-6014bb5e8a42,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:863452ff80df2438acf39867aa600d65da0ea893243277d18415696652b54d51,PodSandboxId:1149ccb29b7780e7d1b9861e55e0c0f93ab25d395de0792edb7406aed5a63f1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581171391202068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b817f35-8247-4f27-8456-829445acac1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245c498f59308230283abe60d7369e5ac1c458d53f68b660bfdfd6ee9bd1c545,PodSandboxId:a9921fcb08a029550b04c8d8c0366219e99483df9364055e7f31bddec5aac05c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170748270698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-t7m8w,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6eab222c-ae91-4937-85a7-8ebe42d731a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d4d369bc9f2384d0246785e455e4be500fab65b6e76fe25e581065432d9219,PodSandboxId:1921ed361a8d7acb6ed500869ddb198a82db1e450925621f73b5e6e0d7923339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170296761960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-n5dr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a12f3179-6eca-4383-b99b-36acf5a5fc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae3ec7491f9782d09da039feaf850ba192127bcd30f828b052b1e9f70fb947,PodSandboxId:889a5187ec5b190d15cdc490445da5a90f6b3810afdf62c3201bfbe25f90d6ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581169289149859,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22v8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d04c08-85f0-4b37-b855-5de9a1b827ed,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3b269f19d485671183d91b45632cbb1e3d1a30e05b52b9fd5ff7ee391a5279,PodSandboxId:b6db5c71e2fdc066cc46be897da7987dd9b8ce7deccfc471ed247b6847192112,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581156631972649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ba0bbc060668759d3ee2e383c10a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3ca7b513ef3ddd2627a130eeeca7fed371a5b9ea66930536dce0cce7d6b7c6,PodSandboxId:a624d126e839a0073ac162604a0686b1b8aef261fb9c006be6272ae075666609,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4d
d44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581156620429174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a8f9cb59c37f82e9b5d2415644431f5199762a1a6621b94c7057bcfbb80916,PodSandboxId:63071a5c9b3a9ac7ad9a73ba275aa6eb2b5e05bb0db53aed8d0ca6649b5642b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581156544433117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6be1df545ee97435548dbf7fa9d4b97,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31eb5478289347b893f247657efaa4a0d2da0a3ad7199c3ec4952b692251fdaa,PodSandboxId:4dfed869285a4a8063f1cd95b0037f3ffc74c5110ceff7a5611e92dc2a054d11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581156466882998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a63daf9fe3c7ead17d25952493bb78,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b174a446de30ecae09e581c181023dc1ae87ff35d35d30f1b59db2fb8a67e7e0,PodSandboxId:0b9f7c39bd4de5037d0acf00116f5ceb12caf999067af09bd8b2492a046a2ffc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580867390705969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21318ae1-a8af-49bb-b075-6ac9dafca053 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.667412070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b312de51-033a-4e33-845a-b633dfc44436 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.667492132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b312de51-033a-4e33-845a-b633dfc44436 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.669205520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de3a7cb9-556f-42ff-af77-6f170e8c0509 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.669601920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582441669575488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de3a7cb9-556f-42ff-af77-6f170e8c0509 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.670313278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80ea0d1c-1197-4074-82f5-0497a9e64c4e name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.670397117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80ea0d1c-1197-4074-82f5-0497a9e64c4e name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.670827989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3,PodSandboxId:58cc0e4851b941df6cbbe4144838057e283fca14803efcb9f2bfdd5239bb7f55,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582138753791589,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sq7fp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a3b2038-23f8-46b3-9e9f-fa0ccca814f1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de73b2a9abdc5bd9cffbee6c0e343ebc031df02be0420a55fc4e10201d77cffb,PodSandboxId:dfeb75b0d2b8e1f1ed462187804411f24bcdbaa9fcc6c3202302dec3af529947,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581180122702852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-59wcn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 8127cc66-08c1-429d-84e5-6014bb5e8a42,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:863452ff80df2438acf39867aa600d65da0ea893243277d18415696652b54d51,PodSandboxId:1149ccb29b7780e7d1b9861e55e0c0f93ab25d395de0792edb7406aed5a63f1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581171391202068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b817f35-8247-4f27-8456-829445acac1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245c498f59308230283abe60d7369e5ac1c458d53f68b660bfdfd6ee9bd1c545,PodSandboxId:a9921fcb08a029550b04c8d8c0366219e99483df9364055e7f31bddec5aac05c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170748270698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-t7m8w,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6eab222c-ae91-4937-85a7-8ebe42d731a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d4d369bc9f2384d0246785e455e4be500fab65b6e76fe25e581065432d9219,PodSandboxId:1921ed361a8d7acb6ed500869ddb198a82db1e450925621f73b5e6e0d7923339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170296761960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-n5dr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a12f3179-6eca-4383-b99b-36acf5a5fc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae3ec7491f9782d09da039feaf850ba192127bcd30f828b052b1e9f70fb947,PodSandboxId:889a5187ec5b190d15cdc490445da5a90f6b3810afdf62c3201bfbe25f90d6ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581169289149859,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22v8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d04c08-85f0-4b37-b855-5de9a1b827ed,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3b269f19d485671183d91b45632cbb1e3d1a30e05b52b9fd5ff7ee391a5279,PodSandboxId:b6db5c71e2fdc066cc46be897da7987dd9b8ce7deccfc471ed247b6847192112,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581156631972649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ba0bbc060668759d3ee2e383c10a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3ca7b513ef3ddd2627a130eeeca7fed371a5b9ea66930536dce0cce7d6b7c6,PodSandboxId:a624d126e839a0073ac162604a0686b1b8aef261fb9c006be6272ae075666609,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4d
d44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581156620429174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a8f9cb59c37f82e9b5d2415644431f5199762a1a6621b94c7057bcfbb80916,PodSandboxId:63071a5c9b3a9ac7ad9a73ba275aa6eb2b5e05bb0db53aed8d0ca6649b5642b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581156544433117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6be1df545ee97435548dbf7fa9d4b97,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31eb5478289347b893f247657efaa4a0d2da0a3ad7199c3ec4952b692251fdaa,PodSandboxId:4dfed869285a4a8063f1cd95b0037f3ffc74c5110ceff7a5611e92dc2a054d11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581156466882998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a63daf9fe3c7ead17d25952493bb78,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b174a446de30ecae09e581c181023dc1ae87ff35d35d30f1b59db2fb8a67e7e0,PodSandboxId:0b9f7c39bd4de5037d0acf00116f5ceb12caf999067af09bd8b2492a046a2ffc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580867390705969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80ea0d1c-1197-4074-82f5-0497a9e64c4e name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.711647053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df497a26-f9f3-4a1e-a249-29516ff5e01a name=/runtime.v1.RuntimeService/Version
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.711750514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df497a26-f9f3-4a1e-a249-29516ff5e01a name=/runtime.v1.RuntimeService/Version
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.712985388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b273a026-19dc-4ae2-902a-8cb998c04042 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.713531449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582441713503589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b273a026-19dc-4ae2-902a-8cb998c04042 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.714298263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=103204a4-f0ac-48f5-b9d6-02d71a80f92b name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.714379020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=103204a4-f0ac-48f5-b9d6-02d71a80f92b name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.714691606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3,PodSandboxId:58cc0e4851b941df6cbbe4144838057e283fca14803efcb9f2bfdd5239bb7f55,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582138753791589,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sq7fp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a3b2038-23f8-46b3-9e9f-fa0ccca814f1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de73b2a9abdc5bd9cffbee6c0e343ebc031df02be0420a55fc4e10201d77cffb,PodSandboxId:dfeb75b0d2b8e1f1ed462187804411f24bcdbaa9fcc6c3202302dec3af529947,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581180122702852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-59wcn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 8127cc66-08c1-429d-84e5-6014bb5e8a42,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:863452ff80df2438acf39867aa600d65da0ea893243277d18415696652b54d51,PodSandboxId:1149ccb29b7780e7d1b9861e55e0c0f93ab25d395de0792edb7406aed5a63f1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581171391202068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b817f35-8247-4f27-8456-829445acac1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245c498f59308230283abe60d7369e5ac1c458d53f68b660bfdfd6ee9bd1c545,PodSandboxId:a9921fcb08a029550b04c8d8c0366219e99483df9364055e7f31bddec5aac05c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170748270698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-t7m8w,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6eab222c-ae91-4937-85a7-8ebe42d731a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d4d369bc9f2384d0246785e455e4be500fab65b6e76fe25e581065432d9219,PodSandboxId:1921ed361a8d7acb6ed500869ddb198a82db1e450925621f73b5e6e0d7923339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170296761960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-n5dr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a12f3179-6eca-4383-b99b-36acf5a5fc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae3ec7491f9782d09da039feaf850ba192127bcd30f828b052b1e9f70fb947,PodSandboxId:889a5187ec5b190d15cdc490445da5a90f6b3810afdf62c3201bfbe25f90d6ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581169289149859,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22v8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d04c08-85f0-4b37-b855-5de9a1b827ed,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3b269f19d485671183d91b45632cbb1e3d1a30e05b52b9fd5ff7ee391a5279,PodSandboxId:b6db5c71e2fdc066cc46be897da7987dd9b8ce7deccfc471ed247b6847192112,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581156631972649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ba0bbc060668759d3ee2e383c10a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3ca7b513ef3ddd2627a130eeeca7fed371a5b9ea66930536dce0cce7d6b7c6,PodSandboxId:a624d126e839a0073ac162604a0686b1b8aef261fb9c006be6272ae075666609,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4d
d44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581156620429174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a8f9cb59c37f82e9b5d2415644431f5199762a1a6621b94c7057bcfbb80916,PodSandboxId:63071a5c9b3a9ac7ad9a73ba275aa6eb2b5e05bb0db53aed8d0ca6649b5642b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581156544433117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6be1df545ee97435548dbf7fa9d4b97,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31eb5478289347b893f247657efaa4a0d2da0a3ad7199c3ec4952b692251fdaa,PodSandboxId:4dfed869285a4a8063f1cd95b0037f3ffc74c5110ceff7a5611e92dc2a054d11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581156466882998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a63daf9fe3c7ead17d25952493bb78,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b174a446de30ecae09e581c181023dc1ae87ff35d35d30f1b59db2fb8a67e7e0,PodSandboxId:0b9f7c39bd4de5037d0acf00116f5ceb12caf999067af09bd8b2492a046a2ffc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580867390705969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=103204a4-f0ac-48f5-b9d6-02d71a80f92b name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.758246512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f47ca24-f1a8-46e3-9f2b-ec7ceb5025a5 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.758324789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f47ca24-f1a8-46e3-9f2b-ec7ceb5025a5 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.760277845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c1e4126-ca4b-4e4a-8f1d-335476bf82ae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.760824528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582441760791499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c1e4126-ca4b-4e4a-8f1d-335476bf82ae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.761729862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df43fa7a-cb3f-4559-a997-5a4bd1097ff6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.761791262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df43fa7a-cb3f-4559-a997-5a4bd1097ff6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:47:21 no-preload-806477 crio[730]: time="2025-01-22 21:47:21.762125541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3,PodSandboxId:58cc0e4851b941df6cbbe4144838057e283fca14803efcb9f2bfdd5239bb7f55,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582138753791589,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sq7fp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a3b2038-23f8-46b3-9e9f-fa0ccca814f1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de73b2a9abdc5bd9cffbee6c0e343ebc031df02be0420a55fc4e10201d77cffb,PodSandboxId:dfeb75b0d2b8e1f1ed462187804411f24bcdbaa9fcc6c3202302dec3af529947,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581180122702852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-59wcn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 8127cc66-08c1-429d-84e5-6014bb5e8a42,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:863452ff80df2438acf39867aa600d65da0ea893243277d18415696652b54d51,PodSandboxId:1149ccb29b7780e7d1b9861e55e0c0f93ab25d395de0792edb7406aed5a63f1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581171391202068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b817f35-8247-4f27-8456-829445acac1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245c498f59308230283abe60d7369e5ac1c458d53f68b660bfdfd6ee9bd1c545,PodSandboxId:a9921fcb08a029550b04c8d8c0366219e99483df9364055e7f31bddec5aac05c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170748270698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-t7m8w,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6eab222c-ae91-4937-85a7-8ebe42d731a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d4d369bc9f2384d0246785e455e4be500fab65b6e76fe25e581065432d9219,PodSandboxId:1921ed361a8d7acb6ed500869ddb198a82db1e450925621f73b5e6e0d7923339,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581170296761960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-n5dr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a12f3179-6eca-4383-b99b-36acf5a5fc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae3ec7491f9782d09da039feaf850ba192127bcd30f828b052b1e9f70fb947,PodSandboxId:889a5187ec5b190d15cdc490445da5a90f6b3810afdf62c3201bfbe25f90d6ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581169289149859,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22v8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d04c08-85f0-4b37-b855-5de9a1b827ed,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3b269f19d485671183d91b45632cbb1e3d1a30e05b52b9fd5ff7ee391a5279,PodSandboxId:b6db5c71e2fdc066cc46be897da7987dd9b8ce7deccfc471ed247b6847192112,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581156631972649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ba0bbc060668759d3ee2e383c10a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3ca7b513ef3ddd2627a130eeeca7fed371a5b9ea66930536dce0cce7d6b7c6,PodSandboxId:a624d126e839a0073ac162604a0686b1b8aef261fb9c006be6272ae075666609,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4d
d44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581156620429174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a8f9cb59c37f82e9b5d2415644431f5199762a1a6621b94c7057bcfbb80916,PodSandboxId:63071a5c9b3a9ac7ad9a73ba275aa6eb2b5e05bb0db53aed8d0ca6649b5642b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113
e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581156544433117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6be1df545ee97435548dbf7fa9d4b97,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31eb5478289347b893f247657efaa4a0d2da0a3ad7199c3ec4952b692251fdaa,PodSandboxId:4dfed869285a4a8063f1cd95b0037f3ffc74c5110ceff7a5611e92dc2a054d11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581156466882998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a63daf9fe3c7ead17d25952493bb78,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b174a446de30ecae09e581c181023dc1ae87ff35d35d30f1b59db2fb8a67e7e0,PodSandboxId:0b9f7c39bd4de5037d0acf00116f5ceb12caf999067af09bd8b2492a046a2ffc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580867390705969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-806477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967e137a3d9168a56046a99a15450328,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df43fa7a-cb3f-4559-a997-5a4bd1097ff6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	90174c5766140       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 minutes ago       Exited              dashboard-metrics-scraper   8                   58cc0e4851b94       dashboard-metrics-scraper-86c6bf9756-sq7fp
	de73b2a9abdc5       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   dfeb75b0d2b8e       kubernetes-dashboard-7779f9b69b-59wcn
	863452ff80df2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   1149ccb29b778       storage-provisioner
	245c498f59308       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   a9921fcb08a02       coredns-668d6bf9bc-t7m8w
	41d4d369bc9f2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   1921ed361a8d7       coredns-668d6bf9bc-n5dr4
	28ae3ec7491f9       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   889a5187ec5b1       kube-proxy-22v8c
	bc3b269f19d48       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   b6db5c71e2fdc       kube-controller-manager-no-preload-806477
	8a3ca7b513ef3       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   a624d126e839a       kube-apiserver-no-preload-806477
	b5a8f9cb59c37       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   63071a5c9b3a9       kube-scheduler-no-preload-806477
	31eb547828934       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   4dfed869285a4       etcd-no-preload-806477
	b174a446de30e       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   0b9f7c39bd4de       kube-apiserver-no-preload-806477
	
	
	==> coredns [245c498f59308230283abe60d7369e5ac1c458d53f68b660bfdfd6ee9bd1c545] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [41d4d369bc9f2384d0246785e455e4be500fab65b6e76fe25e581065432d9219] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-806477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-806477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4
	                    minikube.k8s.io/name=no-preload-806477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_22T21_26_03_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 Jan 2025 21:25:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-806477
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 Jan 2025 21:47:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 Jan 2025 21:45:04 +0000   Wed, 22 Jan 2025 21:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 Jan 2025 21:45:04 +0000   Wed, 22 Jan 2025 21:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 Jan 2025 21:45:04 +0000   Wed, 22 Jan 2025 21:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 Jan 2025 21:45:04 +0000   Wed, 22 Jan 2025 21:26:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    no-preload-806477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c3bddedcc024533ab498513a31f950f
	  System UUID:                3c3bdded-cc02-4533-ab49-8513a31f950f
	  Boot ID:                    cac90b48-161a-4916-bb3e-3333ead4f0ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-n5dr4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-t7m8w                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-806477                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-806477              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-806477     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-22v8c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-806477              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-wnc4r                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-sq7fp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-59wcn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node no-preload-806477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node no-preload-806477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node no-preload-806477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node no-preload-806477 event: Registered Node no-preload-806477 in Controller
	
	
	==> dmesg <==
	[  +0.043935] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.082546] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.214772] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.687107] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.369612] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.060324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065916] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.175835] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.164670] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.316156] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[Jan22 21:21] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.071120] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.430715] systemd-fstab-generator[1446]: Ignoring "noauto" option for root device
	[  +5.705040] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.040832] kauditd_printk_skb: 90 callbacks suppressed
	[Jan22 21:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.281632] systemd-fstab-generator[3161]: Ignoring "noauto" option for root device
	[  +4.539852] kauditd_printk_skb: 58 callbacks suppressed
	[Jan22 21:26] systemd-fstab-generator[3498]: Ignoring "noauto" option for root device
	[  +6.050814] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.122598] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.056781] kauditd_printk_skb: 110 callbacks suppressed
	[ +30.124577] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [31eb5478289347b893f247657efaa4a0d2da0a3ad7199c3ec4952b692251fdaa] <==
	{"level":"info","ts":"2025-01-22T21:27:18.014390Z","caller":"traceutil/trace.go:171","msg":"trace[429338216] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"123.999925ms","start":"2025-01-22T21:27:17.890363Z","end":"2025-01-22T21:27:18.014363Z","steps":["trace[429338216] 'process raft request'  (duration: 116.732604ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:18.156711Z","caller":"traceutil/trace.go:171","msg":"trace[1076141633] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"102.759778ms","start":"2025-01-22T21:27:18.053934Z","end":"2025-01-22T21:27:18.156693Z","steps":["trace[1076141633] 'process raft request'  (duration: 87.652236ms)","trace[1076141633] 'compare'  (duration: 15.027584ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T21:27:52.229356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.836872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4399617231787004815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.10\" mod_revision:624 > success:<request_put:<key:\"/registry/masterleases/192.168.39.10\" value_size:66 lease:4399617231787004813 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.10\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-22T21:27:52.229812Z","caller":"traceutil/trace.go:171","msg":"trace[377257827] linearizableReadLoop","detail":"{readStateIndex:666; appliedIndex:665; }","duration":"218.731367ms","start":"2025-01-22T21:27:52.011052Z","end":"2025-01-22T21:27:52.229783Z","steps":["trace[377257827] 'read index received'  (duration: 24.912283ms)","trace[377257827] 'applied index is now lower than readState.Index'  (duration: 193.817669ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-22T21:27:52.229843Z","caller":"traceutil/trace.go:171","msg":"trace[183350695] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"251.626213ms","start":"2025-01-22T21:27:51.978204Z","end":"2025-01-22T21:27:52.229830Z","steps":["trace[183350695] 'process raft request'  (duration: 57.864349ms)","trace[183350695] 'compare'  (duration: 190.624565ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T21:27:52.229973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.914216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:52.230101Z","caller":"traceutil/trace.go:171","msg":"trace[505898000] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:633; }","duration":"219.123412ms","start":"2025-01-22T21:27:52.010961Z","end":"2025-01-22T21:27:52.230085Z","steps":["trace[505898000] 'agreement among raft nodes before linearized reading'  (duration: 218.922277ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:27:52.533237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.490859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:52.533383Z","caller":"traceutil/trace.go:171","msg":"trace[319631212] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:633; }","duration":"121.671049ms","start":"2025-01-22T21:27:52.411695Z","end":"2025-01-22T21:27:52.533366Z","steps":["trace[319631212] 'range keys from in-memory index tree'  (duration: 121.439886ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:52.879284Z","caller":"traceutil/trace.go:171","msg":"trace[1447007228] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"135.579914ms","start":"2025-01-22T21:27:52.743677Z","end":"2025-01-22T21:27:52.879257Z","steps":["trace[1447007228] 'process raft request'  (duration: 135.44839ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:52.915257Z","caller":"traceutil/trace.go:171","msg":"trace[1333166677] linearizableReadLoop","detail":"{readStateIndex:668; appliedIndex:667; }","duration":"103.092837ms","start":"2025-01-22T21:27:52.812144Z","end":"2025-01-22T21:27:52.915237Z","steps":["trace[1333166677] 'read index received'  (duration: 67.127371ms)","trace[1333166677] 'applied index is now lower than readState.Index'  (duration: 35.964626ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-22T21:27:52.915377Z","caller":"traceutil/trace.go:171","msg":"trace[269991146] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"168.557231ms","start":"2025-01-22T21:27:52.746812Z","end":"2025-01-22T21:27:52.915369Z","steps":["trace[269991146] 'process raft request'  (duration: 168.262902ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:27:52.915523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.359997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:52.915550Z","caller":"traceutil/trace.go:171","msg":"trace[919478017] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:635; }","duration":"103.42292ms","start":"2025-01-22T21:27:52.812118Z","end":"2025-01-22T21:27:52.915541Z","steps":["trace[919478017] 'agreement among raft nodes before linearized reading'  (duration: 103.361388ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:27:53.356883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.465204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:53.356984Z","caller":"traceutil/trace.go:171","msg":"trace[1662972192] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:637; }","duration":"146.603889ms","start":"2025-01-22T21:27:53.210362Z","end":"2025-01-22T21:27:53.356966Z","steps":["trace[1662972192] 'range keys from in-memory index tree'  (duration: 146.380448ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:35:57.947088Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":836}
	{"level":"info","ts":"2025-01-22T21:35:57.993548Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":836,"took":"46.02785ms","hash":2539013091,"current-db-size-bytes":2785280,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2785280,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-01-22T21:35:57.993631Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2539013091,"revision":836,"compact-revision":-1}
	{"level":"info","ts":"2025-01-22T21:40:57.955547Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1088}
	{"level":"info","ts":"2025-01-22T21:40:57.960287Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1088,"took":"4.206293ms","hash":3688108043,"current-db-size-bytes":2785280,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1736704,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-22T21:40:57.960377Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3688108043,"revision":1088,"compact-revision":836}
	{"level":"info","ts":"2025-01-22T21:45:57.963170Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1340}
	{"level":"info","ts":"2025-01-22T21:45:57.968104Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1340,"took":"3.846325ms","hash":2268005308,"current-db-size-bytes":2785280,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1761280,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-22T21:45:57.968220Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2268005308,"revision":1340,"compact-revision":1088}
	
	
	==> kernel <==
	 21:47:22 up 26 min,  0 users,  load average: 0.16, 0.25, 0.25
	Linux no-preload-806477 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8a3ca7b513ef3ddd2627a130eeeca7fed371a5b9ea66930536dce0cce7d6b7c6] <==
	I0122 21:44:00.785845       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:44:00.785950       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 21:45:59.785602       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:45:59.786133       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0122 21:46:00.788316       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:46:00.788564       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0122 21:46:00.788359       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:46:00.788698       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0122 21:46:00.789846       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:46:00.789937       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 21:47:00.790827       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:47:00.790937       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0122 21:47:00.791208       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:47:00.791426       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0122 21:47:00.792124       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:47:00.793274       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b174a446de30ecae09e581c181023dc1ae87ff35d35d30f1b59db2fb8a67e7e0] <==
	W0122 21:25:47.917335       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:47.989381       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:48.112372       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:48.121110       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:48.125641       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:48.302894       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:48.411081       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:48.497325       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:51.567640       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:51.802122       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:51.854359       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:51.906082       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:51.912933       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.118260       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.141938       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.206475       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.266137       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.269109       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.415227       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.454371       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.587411       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.612409       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.691457       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.733360       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:25:52.783693       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [bc3b269f19d485671183d91b45632cbb1e3d1a30e05b52b9fd5ff7ee391a5279] <==
	I0122 21:42:19.548489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="123.05µs"
	I0122 21:42:19.750159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="252.142µs"
	I0122 21:42:28.832233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="721.985µs"
	E0122 21:42:37.735069       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:42:37.803938       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:43:07.742418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:43:07.814399       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:43:37.749440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:43:37.822763       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:44:07.758623       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:44:07.831917       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:44:37.765615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:44:37.843147       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0122 21:45:04.290925       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-806477"
	E0122 21:45:07.772458       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:45:07.852124       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:45:37.781654       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:45:37.861960       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:46:07.788845       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:46:07.871079       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:46:37.796646       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:46:37.880807       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:47:07.805472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:47:07.891806       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0122 21:47:12.752977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="261.213µs"
	
	
	==> kube-proxy [28ae3ec7491f9782d09da039feaf850ba192127bcd30f828b052b1e9f70fb947] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0122 21:26:10.101552       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0122 21:26:10.139214       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E0122 21:26:10.139303       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0122 21:26:10.470088       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0122 21:26:10.470149       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 21:26:10.470183       1 server_linux.go:170] "Using iptables Proxier"
	I0122 21:26:10.523892       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0122 21:26:10.525384       1 server.go:497] "Version info" version="v1.32.1"
	I0122 21:26:10.525400       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 21:26:10.527588       1 config.go:199] "Starting service config controller"
	I0122 21:26:10.527737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0122 21:26:10.527864       1 config.go:105] "Starting endpoint slice config controller"
	I0122 21:26:10.527929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0122 21:26:10.543565       1 config.go:329] "Starting node config controller"
	I0122 21:26:10.544504       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0122 21:26:10.630174       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0122 21:26:10.630223       1 shared_informer.go:320] Caches are synced for service config
	I0122 21:26:10.647228       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b5a8f9cb59c37f82e9b5d2415644431f5199762a1a6621b94c7057bcfbb80916] <==
	W0122 21:26:00.654902       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 21:26:00.654969       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:00.685298       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 21:26:00.685373       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:00.759228       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 21:26:00.759295       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:00.760576       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0122 21:26:00.760679       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:00.790932       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0122 21:26:00.791283       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:00.793876       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 21:26:00.793939       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:00.950478       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0122 21:26:00.950543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:01.051317       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 21:26:01.051377       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0122 21:26:01.103459       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0122 21:26:01.103517       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:01.136876       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 21:26:01.136935       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:01.238252       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0122 21:26:01.238328       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:01.285245       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0122 21:26:01.285349       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0122 21:26:03.524294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 22 21:46:43 no-preload-806477 kubelet[3505]: E0122 21:46:43.222764    3505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582403222364069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:46:43 no-preload-806477 kubelet[3505]: E0122 21:46:43.223274    3505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582403222364069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:46:45 no-preload-806477 kubelet[3505]: E0122 21:46:45.728862    3505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-wnc4r" podUID="0c5809fa-0fa9-4635-bc21-3dc0e9ea6e74"
	Jan 22 21:46:47 no-preload-806477 kubelet[3505]: I0122 21:46:47.727604    3505 scope.go:117] "RemoveContainer" containerID="90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3"
	Jan 22 21:46:47 no-preload-806477 kubelet[3505]: E0122 21:46:47.728093    3505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sq7fp_kubernetes-dashboard(4a3b2038-23f8-46b3-9e9f-fa0ccca814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sq7fp" podUID="4a3b2038-23f8-46b3-9e9f-fa0ccca814f1"
	Jan 22 21:46:53 no-preload-806477 kubelet[3505]: E0122 21:46:53.225751    3505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582413225399455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:46:53 no-preload-806477 kubelet[3505]: E0122 21:46:53.225806    3505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582413225399455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:46:57 no-preload-806477 kubelet[3505]: E0122 21:46:57.748292    3505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 22 21:46:57 no-preload-806477 kubelet[3505]: E0122 21:46:57.748718    3505 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 22 21:46:57 no-preload-806477 kubelet[3505]: E0122 21:46:57.750303    3505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75vx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-wnc4r_kube-system(0c5809fa-0fa9-4635-bc21-3dc0e9ea6e74): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 22 21:46:57 no-preload-806477 kubelet[3505]: E0122 21:46:57.751557    3505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-wnc4r" podUID="0c5809fa-0fa9-4635-bc21-3dc0e9ea6e74"
	Jan 22 21:47:01 no-preload-806477 kubelet[3505]: I0122 21:47:01.727979    3505 scope.go:117] "RemoveContainer" containerID="90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3"
	Jan 22 21:47:01 no-preload-806477 kubelet[3505]: E0122 21:47:01.728828    3505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sq7fp_kubernetes-dashboard(4a3b2038-23f8-46b3-9e9f-fa0ccca814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sq7fp" podUID="4a3b2038-23f8-46b3-9e9f-fa0ccca814f1"
	Jan 22 21:47:02 no-preload-806477 kubelet[3505]: E0122 21:47:02.750544    3505 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:47:02 no-preload-806477 kubelet[3505]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:47:02 no-preload-806477 kubelet[3505]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:47:02 no-preload-806477 kubelet[3505]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:47:02 no-preload-806477 kubelet[3505]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:47:03 no-preload-806477 kubelet[3505]: E0122 21:47:03.228968    3505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582423228326491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:03 no-preload-806477 kubelet[3505]: E0122 21:47:03.229051    3505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582423228326491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:12 no-preload-806477 kubelet[3505]: E0122 21:47:12.730401    3505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-wnc4r" podUID="0c5809fa-0fa9-4635-bc21-3dc0e9ea6e74"
	Jan 22 21:47:13 no-preload-806477 kubelet[3505]: E0122 21:47:13.231285    3505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582433230714633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:13 no-preload-806477 kubelet[3505]: E0122 21:47:13.231392    3505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582433230714633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:15 no-preload-806477 kubelet[3505]: I0122 21:47:15.728187    3505 scope.go:117] "RemoveContainer" containerID="90174c5766140563b56c1a7c41b6f2a5c95774d20328b85059ad7ab5a71d57d3"
	Jan 22 21:47:15 no-preload-806477 kubelet[3505]: E0122 21:47:15.729103    3505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sq7fp_kubernetes-dashboard(4a3b2038-23f8-46b3-9e9f-fa0ccca814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sq7fp" podUID="4a3b2038-23f8-46b3-9e9f-fa0ccca814f1"
	
	
	==> kubernetes-dashboard [de73b2a9abdc5bd9cffbee6c0e343ebc031df02be0420a55fc4e10201d77cffb] <==
	2025/01/22 21:35:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:35:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:37:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:38:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:39:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:40:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:40:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:41:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:41:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:42:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:42:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:43:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:43:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:44:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:44:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:45:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:45:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:46:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:46:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:47:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [863452ff80df2438acf39867aa600d65da0ea893243277d18415696652b54d51] <==
	I0122 21:26:11.571981       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0122 21:26:11.607372       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0122 21:26:11.607610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0122 21:26:11.625913       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0122 21:26:11.629224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36ebc8f5-393c-4d90-9ab9-613f4b1d5cbc", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-806477_e5bccd1a-3ac5-4276-a25e-4e91ee045d1b became leader
	I0122 21:26:11.629302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-806477_e5bccd1a-3ac5-4276-a25e-4e91ee045d1b!
	I0122 21:26:11.730277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-806477_e5bccd1a-3ac5-4276-a25e-4e91ee045d1b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-806477 -n no-preload-806477
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-806477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-wnc4r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-806477 describe pod metrics-server-f79f97bbb-wnc4r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-806477 describe pod metrics-server-f79f97bbb-wnc4r: exit status 1 (73.330929ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-wnc4r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-806477 describe pod metrics-server-f79f97bbb-wnc4r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1620.21s)
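The ErrImagePull / ImagePullBackOff loop in the kubelet log above is expected for this profile: the metrics-server addon is configured to pull from the placeholder registry fake.domain, which DNS cannot resolve ("lookup fake.domain: no such host"). A minimal sketch of how one might confirm by hand which image reference the addon's deployment carries (assuming the deployment is named metrics-server in kube-system and the no-preload-806477 context is still reachable; the jsonpath query is illustrative and not part of the test suite):

	kubectl --context no-preload-806477 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# based on the kubelet errors above, this should print fake.domain/registry.k8s.io/echoserver:1.4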

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-181389 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-181389 create -f testdata/busybox.yaml: exit status 1 (55.54146ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-181389" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-181389 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 6 (270.198465ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0122 21:21:26.458311  311826 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-181389" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-181389" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 6 (303.046919ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0122 21:21:26.759969  311862 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-181389" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-181389" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.63s)
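Both status checks above fail the same way: the profile's entry is missing from /home/jenkins/minikube-integration/20288-247142/kubeconfig, so every `kubectl --context old-k8s-version-181389` call aborts before it ever reaches the cluster. A minimal sketch of the recovery the warning itself suggests (assuming the profile's VM really is healthy; exact output will vary from run to run):

	# rewrite the kubeconfig entry for this profile, as the warning recommends
	minikube -p old-k8s-version-181389 update-context
	# confirm the context exists again before retrying the busybox deploy
	kubectl config get-contexts old-k8s-version-181389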

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-181389 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0122 21:21:33.036837  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-181389 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m26.562096846s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-181389 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-181389 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-181389 describe deploy/metrics-server -n kube-system: exit status 1 (53.217024ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-181389" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-181389 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 6 (250.268388ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0122 21:22:53.634333  312559 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-181389" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-181389" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.87s)
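The "Using image fake.domain/registry.k8s.io/echoserver:1.4" line above suggests how the two enable flags combine: the --registries value for an addon key is prefixed onto the matching --images value. A throwaway illustration of that string composition (not minikube code, just the concatenation seen in this run's output):

	printf '%s/%s\n' 'fake.domain' 'registry.k8s.io/echoserver:1.4'
	# -> fake.domain/registry.k8s.io/echoserver:1.4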

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1592.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-991469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:21:51.117359  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.123864  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.135476  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.157069  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.198567  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.280093  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.441713  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:51.763416  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:52.405545  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:53.686907  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:56.248779  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:01.370811  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:04.885356  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:04.891820  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:04.903332  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:04.924855  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:04.966469  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:05.048745  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:05.210543  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:05.532171  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:06.174404  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:07.456199  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:10.018309  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:11.612984  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:15.139713  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:18.344221  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:25.381852  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.087009  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.093510  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.104982  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.126469  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.168464  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.250057  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.412252  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:26.734032  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:27.376303  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:28.657869  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:31.219844  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:31.706373  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:32.095231  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:36.341597  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:45.863467  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:46.583573  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:47.117628  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-991469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m30.171731232s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-991469] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-991469" primary control-plane node in "default-k8s-diff-port-991469" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-991469" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-991469 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 21:21:34.811097  312064 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:21:34.811232  312064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:21:34.811244  312064 out.go:358] Setting ErrFile to fd 2...
	I0122 21:21:34.811250  312064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:21:34.811453  312064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:21:34.812052  312064 out.go:352] Setting JSON to false
	I0122 21:21:34.813205  312064 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14641,"bootTime":1737566254,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:21:34.813345  312064 start.go:139] virtualization: kvm guest
	I0122 21:21:34.815666  312064 out.go:177] * [default-k8s-diff-port-991469] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:21:34.817549  312064 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:21:34.817537  312064 notify.go:220] Checking for updates...
	I0122 21:21:34.820471  312064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:21:34.821970  312064 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:21:34.823401  312064 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:21:34.824949  312064 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:21:34.826431  312064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:21:34.828283  312064 config.go:182] Loaded profile config "default-k8s-diff-port-991469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:21:34.828791  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:21:34.828894  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:21:34.846506  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
	I0122 21:21:34.847062  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:21:34.847751  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:21:34.847782  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:21:34.848269  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:21:34.848499  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:34.848797  312064 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:21:34.849157  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:21:34.849223  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:21:34.866908  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0122 21:21:34.867462  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:21:34.868130  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:21:34.868171  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:21:34.868550  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:21:34.868774  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:34.909584  312064 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:21:34.910917  312064 start.go:297] selected driver: kvm2
	I0122 21:21:34.910944  312064 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.98 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:21:34.911129  312064 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:21:34.912194  312064 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:21:34.912291  312064 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:21:34.929724  312064 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:21:34.930434  312064 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:21:34.930508  312064 cni.go:84] Creating CNI manager for ""
	I0122 21:21:34.930582  312064 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:21:34.930642  312064 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.98 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:21:34.930836  312064 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:21:34.933891  312064 out.go:177] * Starting "default-k8s-diff-port-991469" primary control-plane node in "default-k8s-diff-port-991469" cluster
	I0122 21:21:34.935423  312064 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:21:34.935527  312064 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:21:34.935540  312064 cache.go:56] Caching tarball of preloaded images
	I0122 21:21:34.935684  312064 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:21:34.935699  312064 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:21:34.935847  312064 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/config.json ...
	I0122 21:21:34.936129  312064 start.go:360] acquireMachinesLock for default-k8s-diff-port-991469: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:21:34.936200  312064 start.go:364] duration metric: took 37.462µs to acquireMachinesLock for "default-k8s-diff-port-991469"
	I0122 21:21:34.936221  312064 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:21:34.936228  312064 fix.go:54] fixHost starting: 
	I0122 21:21:34.936623  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:21:34.936680  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:21:34.954764  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0122 21:21:34.955306  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:21:34.955954  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:21:34.955989  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:21:34.956387  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:21:34.956639  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:34.956841  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetState
	I0122 21:21:34.959069  312064 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991469: state=Stopped err=<nil>
	I0122 21:21:34.959127  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	W0122 21:21:34.959324  312064 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:21:34.961475  312064 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991469" ...
	I0122 21:21:34.963034  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Start
	I0122 21:21:34.963387  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) starting domain...
	I0122 21:21:34.963404  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) ensuring networks are active...
	I0122 21:21:34.964526  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Ensuring network default is active
	I0122 21:21:34.964957  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Ensuring network mk-default-k8s-diff-port-991469 is active
	I0122 21:21:34.965364  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) getting domain XML...
	I0122 21:21:34.966353  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) creating domain...
	I0122 21:21:36.496272  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) waiting for IP...
	I0122 21:21:36.497595  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:36.498344  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:36.498421  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:36.498320  312099 retry.go:31] will retry after 207.010146ms: waiting for domain to come up
	I0122 21:21:36.707178  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:36.707934  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:36.707978  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:36.707860  312099 retry.go:31] will retry after 321.189302ms: waiting for domain to come up
	I0122 21:21:37.030784  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:37.031524  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:37.031553  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:37.031463  312099 retry.go:31] will retry after 361.50153ms: waiting for domain to come up
	I0122 21:21:37.395299  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:37.396046  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:37.396086  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:37.395973  312099 retry.go:31] will retry after 536.341171ms: waiting for domain to come up
	I0122 21:21:37.933997  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:37.934711  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:37.934743  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:37.934665  312099 retry.go:31] will retry after 562.128607ms: waiting for domain to come up
	I0122 21:21:38.498661  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:38.499438  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:38.499467  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:38.499358  312099 retry.go:31] will retry after 782.423031ms: waiting for domain to come up
	I0122 21:21:39.283617  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:39.284367  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:39.284405  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:39.284338  312099 retry.go:31] will retry after 1.032334805s: waiting for domain to come up
	I0122 21:21:40.318133  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:40.318862  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:40.318901  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:40.318828  312099 retry.go:31] will retry after 1.313364963s: waiting for domain to come up
	I0122 21:21:41.634304  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:41.634824  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:41.634850  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:41.634781  312099 retry.go:31] will retry after 1.846748758s: waiting for domain to come up
	I0122 21:21:43.483765  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:43.484339  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:43.484409  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:43.484273  312099 retry.go:31] will retry after 1.520131084s: waiting for domain to come up
	I0122 21:21:45.006241  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:45.007029  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:45.007059  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:45.006938  312099 retry.go:31] will retry after 1.979658326s: waiting for domain to come up
	I0122 21:21:46.988215  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:46.988853  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:46.988941  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:46.988834  312099 retry.go:31] will retry after 2.23908918s: waiting for domain to come up
	I0122 21:21:49.230152  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:49.230683  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:49.230718  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:49.230613  312099 retry.go:31] will retry after 3.4083592s: waiting for domain to come up
	I0122 21:21:52.640270  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:52.640831  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | unable to find current IP address of domain default-k8s-diff-port-991469 in network mk-default-k8s-diff-port-991469
	I0122 21:21:52.640852  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | I0122 21:21:52.640769  312099 retry.go:31] will retry after 3.596758049s: waiting for domain to come up
	I0122 21:21:56.239261  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.239850  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has current primary IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.239878  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) found domain IP: 192.168.61.98
	I0122 21:21:56.239892  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) reserving static IP address...
	I0122 21:21:56.240356  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991469", mac: "52:54:00:39:fa:b7", ip: "192.168.61.98"} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.240391  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | skip adding static IP to network mk-default-k8s-diff-port-991469 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991469", mac: "52:54:00:39:fa:b7", ip: "192.168.61.98"}
	I0122 21:21:56.240415  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) reserved static IP address 192.168.61.98 for domain default-k8s-diff-port-991469
	I0122 21:21:56.240431  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) waiting for SSH...
	I0122 21:21:56.240444  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Getting to WaitForSSH function...
	I0122 21:21:56.242672  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.243135  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.243183  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.243336  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Using SSH client type: external
	I0122 21:21:56.243370  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa (-rw-------)
	I0122 21:21:56.243405  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:21:56.243418  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | About to run SSH command:
	I0122 21:21:56.243431  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | exit 0
	I0122 21:21:56.371090  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | SSH cmd err, output: <nil>: 
	I0122 21:21:56.371576  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetConfigRaw
	I0122 21:21:56.372416  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetIP
	I0122 21:21:56.375683  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.376096  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.376131  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.376471  312064 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/config.json ...
	I0122 21:21:56.376755  312064 machine.go:93] provisionDockerMachine start ...
	I0122 21:21:56.376782  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:56.377107  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:56.379968  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.380374  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.380410  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.380559  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:56.380802  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.381009  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.381137  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:56.381321  312064 main.go:141] libmachine: Using SSH client type: native
	I0122 21:21:56.381534  312064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.98 22 <nil> <nil>}
	I0122 21:21:56.381548  312064 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:21:56.491199  312064 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:21:56.491237  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetMachineName
	I0122 21:21:56.491505  312064 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991469"
	I0122 21:21:56.491540  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetMachineName
	I0122 21:21:56.491738  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:56.494965  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.495284  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.495312  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.495581  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:56.495832  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.496036  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.496186  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:56.496360  312064 main.go:141] libmachine: Using SSH client type: native
	I0122 21:21:56.496608  312064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.98 22 <nil> <nil>}
	I0122 21:21:56.496629  312064 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991469 && echo "default-k8s-diff-port-991469" | sudo tee /etc/hostname
	I0122 21:21:56.634387  312064 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991469
	
	I0122 21:21:56.634433  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:56.637890  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.638386  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.638420  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.638797  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:56.639026  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.639248  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.639409  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:56.639637  312064 main.go:141] libmachine: Using SSH client type: native
	I0122 21:21:56.639954  312064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.98 22 <nil> <nil>}
	I0122 21:21:56.639986  312064 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991469/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:21:56.769822  312064 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:21:56.769852  312064 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:21:56.769877  312064 buildroot.go:174] setting up certificates
	I0122 21:21:56.769890  312064 provision.go:84] configureAuth start
	I0122 21:21:56.769904  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetMachineName
	I0122 21:21:56.770270  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetIP
	I0122 21:21:56.773292  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.773685  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.773736  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.773908  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:56.777077  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.777514  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.777550  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.777828  312064 provision.go:143] copyHostCerts
	I0122 21:21:56.777906  312064 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:21:56.777931  312064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:21:56.778018  312064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:21:56.778176  312064 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:21:56.778215  312064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:21:56.778259  312064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:21:56.778350  312064 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:21:56.778363  312064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:21:56.778395  312064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:21:56.778549  312064 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991469 san=[127.0.0.1 192.168.61.98 default-k8s-diff-port-991469 localhost minikube]
	I0122 21:21:56.946352  312064 provision.go:177] copyRemoteCerts
	I0122 21:21:56.946441  312064 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:21:56.946479  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:56.949539  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.949932  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:56.949974  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:56.950202  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:56.950416  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:56.950628  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:56.950801  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:21:57.040320  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:21:57.075685  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0122 21:21:57.110817  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:21:57.143244  312064 provision.go:87] duration metric: took 373.3375ms to configureAuth
	I0122 21:21:57.143278  312064 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:21:57.143507  312064 config.go:182] Loaded profile config "default-k8s-diff-port-991469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:21:57.143611  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:57.146981  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.147429  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:57.147462  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.147699  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:57.147953  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.148164  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.148326  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:57.148577  312064 main.go:141] libmachine: Using SSH client type: native
	I0122 21:21:57.148855  312064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.98 22 <nil> <nil>}
	I0122 21:21:57.148882  312064 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:21:57.418449  312064 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:21:57.418484  312064 machine.go:96] duration metric: took 1.041712184s to provisionDockerMachine
	I0122 21:21:57.418503  312064 start.go:293] postStartSetup for "default-k8s-diff-port-991469" (driver="kvm2")
	I0122 21:21:57.418520  312064 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:21:57.418549  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:57.418943  312064 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:21:57.418987  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:57.422402  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.422804  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:57.422837  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.423104  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:57.423348  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.423543  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:57.423784  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:21:57.513202  312064 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:21:57.518710  312064 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:21:57.518752  312064 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:21:57.518839  312064 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:21:57.518961  312064 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:21:57.519097  312064 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:21:57.536252  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:21:57.577274  312064 start.go:296] duration metric: took 158.751584ms for postStartSetup
	I0122 21:21:57.577323  312064 fix.go:56] duration metric: took 22.641095309s for fixHost
	I0122 21:21:57.577350  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:57.580891  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.581291  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:57.581322  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.581750  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:57.582000  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.582238  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.582401  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:57.582604  312064 main.go:141] libmachine: Using SSH client type: native
	I0122 21:21:57.582800  312064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.98 22 <nil> <nil>}
	I0122 21:21:57.582848  312064 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:21:57.696900  312064 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580917.647209765
	
	I0122 21:21:57.696935  312064 fix.go:216] guest clock: 1737580917.647209765
	I0122 21:21:57.696946  312064 fix.go:229] Guest: 2025-01-22 21:21:57.647209765 +0000 UTC Remote: 2025-01-22 21:21:57.577327626 +0000 UTC m=+22.810038070 (delta=69.882139ms)
	I0122 21:21:57.696976  312064 fix.go:200] guest clock delta is within tolerance: 69.882139ms
	I0122 21:21:57.696983  312064 start.go:83] releasing machines lock for "default-k8s-diff-port-991469", held for 22.760769864s
	I0122 21:21:57.697010  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:57.697322  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetIP
	I0122 21:21:57.700480  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.700929  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:57.700969  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.701201  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:57.701832  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:57.702068  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:21:57.702206  312064 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:21:57.702254  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:57.702400  312064 ssh_runner.go:195] Run: cat /version.json
	I0122 21:21:57.702447  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:21:57.705468  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.705666  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.705904  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:57.705934  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.706146  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:57.706214  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:57.706263  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:57.706424  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.706425  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:21:57.706613  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:57.706623  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:21:57.706791  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:21:57.706801  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:21:57.706947  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:21:57.814522  312064 ssh_runner.go:195] Run: systemctl --version
	I0122 21:21:57.822115  312064 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:21:57.978576  312064 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:21:57.987469  312064 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:21:57.987561  312064 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:21:58.013786  312064 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:21:58.013827  312064 start.go:495] detecting cgroup driver to use...
	I0122 21:21:58.013960  312064 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:21:58.037259  312064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:21:58.056417  312064 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:21:58.056488  312064 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:21:58.073475  312064 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:21:58.091050  312064 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:21:58.260530  312064 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:21:58.449900  312064 docker.go:233] disabling docker service ...
	I0122 21:21:58.449992  312064 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:21:58.468524  312064 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:21:58.488404  312064 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:21:58.642368  312064 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:21:58.802589  312064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:21:58.819956  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:21:58.842961  312064 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:21:58.843055  312064 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.857888  312064 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:21:58.857993  312064 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.872471  312064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.888077  312064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.903990  312064 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:21:58.918833  312064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.932210  312064 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.955468  312064 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:21:58.968773  312064 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:21:58.981274  312064 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:21:58.981356  312064 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:21:58.998613  312064 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:21:59.012388  312064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:21:59.178587  312064 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:21:59.303501  312064 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:21:59.303598  312064 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:21:59.310066  312064 start.go:563] Will wait 60s for crictl version
	I0122 21:21:59.310160  312064 ssh_runner.go:195] Run: which crictl
	I0122 21:21:59.315139  312064 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:21:59.364372  312064 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:21:59.364475  312064 ssh_runner.go:195] Run: crio --version
	I0122 21:21:59.398039  312064 ssh_runner.go:195] Run: crio --version
	I0122 21:21:59.436671  312064 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:21:59.438130  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetIP
	I0122 21:21:59.441438  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:59.441792  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:21:59.441825  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:21:59.442089  312064 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0122 21:21:59.447304  312064 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:21:59.462305  312064 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-991469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.98 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:21:59.462434  312064 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:21:59.462476  312064 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:21:59.505624  312064 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:21:59.505732  312064 ssh_runner.go:195] Run: which lz4
	I0122 21:21:59.511084  312064 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:21:59.518233  312064 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:21:59.518290  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:22:01.143129  312064 crio.go:462] duration metric: took 1.632049666s to copy over tarball
	I0122 21:22:01.143252  312064 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:22:03.532080  312064 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.388787884s)
	I0122 21:22:03.532120  312064 crio.go:469] duration metric: took 2.388945504s to extract the tarball
	I0122 21:22:03.532130  312064 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:22:03.571821  312064 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:22:03.633304  312064 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:22:03.633331  312064 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:22:03.633341  312064 kubeadm.go:934] updating node { 192.168.61.98 8444 v1.32.1 crio true true} ...
	I0122 21:22:03.633486  312064 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:22:03.633573  312064 ssh_runner.go:195] Run: crio config
	I0122 21:22:03.684782  312064 cni.go:84] Creating CNI manager for ""
	I0122 21:22:03.684817  312064 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:22:03.684835  312064 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:22:03.684877  312064 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.98 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991469 NodeName:default-k8s-diff-port-991469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:22:03.685074  312064 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.98
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991469"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.98"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.98"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:22:03.685147  312064 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:22:03.696514  312064 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:22:03.696587  312064 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:22:03.707737  312064 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0122 21:22:03.728083  312064 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:22:03.747606  312064 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0122 21:22:03.767601  312064 ssh_runner.go:195] Run: grep 192.168.61.98	control-plane.minikube.internal$ /etc/hosts
	I0122 21:22:03.772569  312064 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:22:03.787217  312064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:22:03.921892  312064 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:22:03.942126  312064 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469 for IP: 192.168.61.98
	I0122 21:22:03.942163  312064 certs.go:194] generating shared ca certs ...
	I0122 21:22:03.942203  312064 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:22:03.942455  312064 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:22:03.942516  312064 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:22:03.942529  312064 certs.go:256] generating profile certs ...
	I0122 21:22:03.942653  312064 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/client.key
	I0122 21:22:03.942744  312064 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/apiserver.key.f0415887
	I0122 21:22:03.942795  312064 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/proxy-client.key
	I0122 21:22:03.942981  312064 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:22:03.943035  312064 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:22:03.943051  312064 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:22:03.943091  312064 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:22:03.943126  312064 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:22:03.943162  312064 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:22:03.943226  312064 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:22:03.943981  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:22:03.991974  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:22:04.031964  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:22:04.074021  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:22:04.113404  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0122 21:22:04.158258  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:22:04.193424  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:22:04.223208  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/default-k8s-diff-port-991469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0122 21:22:04.252526  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:22:04.282038  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:22:04.312104  312064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:22:04.341966  312064 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:22:04.363832  312064 ssh_runner.go:195] Run: openssl version
	I0122 21:22:04.373424  312064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:22:04.388273  312064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:22:04.394102  312064 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:22:04.394192  312064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:22:04.401260  312064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:22:04.415888  312064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:22:04.429932  312064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:22:04.437515  312064 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:22:04.437599  312064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:22:04.444770  312064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:22:04.459650  312064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:22:04.474493  312064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:22:04.480759  312064 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:22:04.480838  312064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:22:04.487707  312064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:22:04.501826  312064 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:22:04.508360  312064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:22:04.515732  312064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:22:04.523191  312064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:22:04.531198  312064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:22:04.539162  312064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:22:04.546451  312064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0122 21:22:04.554167  312064 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-991469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991469
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.98 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpi
ration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:22:04.554351  312064 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:22:04.554460  312064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:22:04.608342  312064 cri.go:89] found id: ""
	I0122 21:22:04.608439  312064 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:22:04.620759  312064 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:22:04.620785  312064 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:22:04.620846  312064 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:22:04.634794  312064 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:22:04.636045  312064 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-991469" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:22:04.636882  312064 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-991469" cluster setting kubeconfig missing "default-k8s-diff-port-991469" context setting]
	I0122 21:22:04.638039  312064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:22:04.654006  312064 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:22:04.668761  312064 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.98
	I0122 21:22:04.668806  312064 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:22:04.668823  312064 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:22:04.668903  312064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:22:04.716072  312064 cri.go:89] found id: ""
	I0122 21:22:04.716170  312064 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:22:04.737570  312064 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:22:04.750370  312064 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:22:04.750395  312064 kubeadm.go:157] found existing configuration files:
	
	I0122 21:22:04.750463  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0122 21:22:04.763139  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:22:04.763224  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:22:04.777718  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0122 21:22:04.791134  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:22:04.791211  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:22:04.803417  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0122 21:22:04.814854  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:22:04.814922  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:22:04.826997  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0122 21:22:04.838744  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:22:04.838827  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
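
The cleanup loop above greps each kubeconfig-style file under /etc/kubernetes for the expected endpoint (https://control-plane.minikube.internal:8444) and removes the file when the endpoint is absent, or, as in this run, when the file does not exist at all. A minimal sketch of that check-then-remove logic, written as plain Go rather than the grep/rm commands in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale deletes path unless it already references endpoint
    // (the "grep <endpoint> <file> || rm -f <file>" pattern above).
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // file exists and points at the right endpoint: keep it
    	}
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err // rm -f semantics: a missing file is not an error
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
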
	I0122 21:22:04.851213  312064 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:22:04.863865  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:22:05.007351  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:22:06.201586  312064 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.194192964s)
	I0122 21:22:06.201636  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:22:06.521189  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:22:06.593131  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
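
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A rough sketch of driving those same phases over a shell, assuming kubeadm lives at the versioned path shown in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		// Same shape as the commands in the log: run each phase with the pinned binary dir on PATH.
    		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Print(string(out))
    		if err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
    			os.Exit(1)
    		}
    	}
    }
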
	I0122 21:22:06.710542  312064 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:22:06.710646  312064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:22:07.211485  312064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:22:07.711055  312064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:22:07.740563  312064 api_server.go:72] duration metric: took 1.030028107s to wait for apiserver process to appear ...
	I0122 21:22:07.740594  312064 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:22:07.740617  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:07.741271  312064 api_server.go:269] stopped: https://192.168.61.98:8444/healthz: Get "https://192.168.61.98:8444/healthz": dial tcp 192.168.61.98:8444: connect: connection refused
	I0122 21:22:08.240790  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:10.705534  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:22:10.705572  312064 api_server.go:103] status: https://192.168.61.98:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:22:10.705592  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:10.742015  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:22:10.742050  312064 api_server.go:103] status: https://192.168.61.98:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:22:10.742069  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:10.811041  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:22:10.811077  312064 api_server.go:103] status: https://192.168.61.98:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:22:11.241402  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:11.252034  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:22:11.252077  312064 api_server.go:103] status: https://192.168.61.98:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:22:11.740742  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:11.754327  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:22:11.754374  312064 api_server.go:103] status: https://192.168.61.98:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:22:12.241028  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:12.253635  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:22:12.253676  312064 api_server.go:103] status: https://192.168.61.98:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:22:12.741404  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:22:12.748707  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 200:
	ok
	I0122 21:22:12.757373  312064 api_server.go:141] control plane version: v1.32.1
	I0122 21:22:12.757412  312064 api_server.go:131] duration metric: took 5.016810721s to wait for apiserver health ...
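
The healthz loop above tolerates "connection refused", 403 (the anonymous user before RBAC bootstrap finishes) and 500 (post-start hooks such as rbac/bootstrap-roles still failing), and only stops once /healthz returns a plain 200 "ok". A minimal polling sketch with net/http; skipping TLS verification is an assumption made to keep the example self-contained, since the real client authenticates against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for the sketch: skip server cert verification instead of loading the minikube CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.61.98:8444/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			// 403/500 while RBAC bootstrap and post-start hooks finish: keep polling.
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }
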
	I0122 21:22:12.757425  312064 cni.go:84] Creating CNI manager for ""
	I0122 21:22:12.757435  312064 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:22:12.759541  312064 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:22:12.761105  312064 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:22:12.774827  312064 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:22:12.818834  312064 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:22:12.866618  312064 system_pods.go:59] 8 kube-system pods found
	I0122 21:22:12.866689  312064 system_pods.go:61] "coredns-668d6bf9bc-rpbsm" [86839b3f-e37b-47fa-9133-f5dbcc074c0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:22:12.866707  312064 system_pods.go:61] "etcd-default-k8s-diff-port-991469" [7f2aadc1-bf45-404d-95f1-47082e2a156a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:22:12.866720  312064 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991469" [53585d54-0756-43eb-a638-1542357f1268] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:22:12.866732  312064 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991469" [598c1385-be8b-4d09-8489-a5d372d77d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:22:12.866746  312064 system_pods.go:61] "kube-proxy-b52wp" [ee2b5545-0836-4d81-80b2-58c4c138770f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0122 21:22:12.866754  312064 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991469" [ace7cf23-f33f-45cd-ae37-59adf2fc0d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:22:12.866766  312064 system_pods.go:61] "metrics-server-f79f97bbb-c87wg" [4a950b12-f600-4b71-83b4-2dcf5fd18627] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:22:12.866779  312064 system_pods.go:61] "storage-provisioner" [8886c117-d820-4b11-9c72-f24e0a139c24] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0122 21:22:12.866790  312064 system_pods.go:74] duration metric: took 47.920737ms to wait for pod list to return data ...
	I0122 21:22:12.866805  312064 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:22:12.894746  312064 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:22:12.894784  312064 node_conditions.go:123] node cpu capacity is 2
	I0122 21:22:12.894797  312064 node_conditions.go:105] duration metric: took 27.983477ms to run NodePressure ...
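
The two steps above, listing kube-system pods and then reading node capacity (ephemeral storage and CPU) for the NodePressure check, are plain Kubernetes API reads. A hedged client-go sketch of those reads; the kubeconfig path from this run is reused purely as an illustration and assumes the cluster is reachable from wherever the sketch runs:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20288-247142/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Equivalent of "waiting for kube-system pods to appear".
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	// Equivalent of the NodePressure capacity checks.
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
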
	I0122 21:22:12.894817  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:22:13.311940  312064 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0122 21:22:13.319927  312064 kubeadm.go:739] kubelet initialised
	I0122 21:22:13.319968  312064 kubeadm.go:740] duration metric: took 7.99001ms waiting for restarted kubelet to initialise ...
	I0122 21:22:13.319982  312064 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:22:13.329339  312064 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rpbsm" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:13.345096  312064 pod_ready.go:98] node "default-k8s-diff-port-991469" hosting pod "coredns-668d6bf9bc-rpbsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991469" has status "Ready":"False"
	I0122 21:22:13.345131  312064 pod_ready.go:82] duration metric: took 15.746448ms for pod "coredns-668d6bf9bc-rpbsm" in "kube-system" namespace to be "Ready" ...
	E0122 21:22:13.345160  312064 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-991469" hosting pod "coredns-668d6bf9bc-rpbsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-991469" has status "Ready":"False"
	I0122 21:22:13.345171  312064 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:15.352398  312064 pod_ready.go:103] pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:17.353350  312064 pod_ready.go:103] pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:19.353477  312064 pod_ready.go:103] pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:21.352646  312064 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:22:21.352677  312064 pod_ready.go:82] duration metric: took 8.007495551s for pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:21.352687  312064 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:23.360127  312064 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:24.361178  312064 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:22:24.361215  312064 pod_ready.go:82] duration metric: took 3.008519836s for pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.361227  312064 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.374403  312064 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:22:24.374432  312064 pod_ready.go:82] duration metric: took 13.197391ms for pod "kube-controller-manager-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.374453  312064 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b52wp" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.382473  312064 pod_ready.go:93] pod "kube-proxy-b52wp" in "kube-system" namespace has status "Ready":"True"
	I0122 21:22:24.382501  312064 pod_ready.go:82] duration metric: took 8.040605ms for pod "kube-proxy-b52wp" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.382511  312064 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.388579  312064 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:22:24.388606  312064 pod_ready.go:82] duration metric: took 6.088561ms for pod "kube-scheduler-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:24.388616  312064 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace to be "Ready" ...
	I0122 21:22:26.395823  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:28.897280  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:31.396661  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:33.895886  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:35.896444  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:37.896982  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:39.898295  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:42.396526  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:44.896120  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:46.897013  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:48.897625  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:51.395512  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:53.396939  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:55.896931  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:22:57.897498  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:00.396795  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:02.896202  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:04.896550  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:07.396986  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:09.397620  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:11.897506  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:14.396700  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:16.397292  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:18.399103  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:20.896376  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:22.903437  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:25.398491  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:27.896735  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:29.897670  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:31.898102  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:34.397356  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:36.897852  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:39.396971  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:41.900587  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:44.396875  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:46.397818  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:48.896687  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:50.897908  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:53.396002  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:55.396822  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:23:57.897107  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:00.395477  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:02.397022  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:04.397815  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:06.896397  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:08.896937  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:11.396368  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:13.895342  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:15.897813  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:18.396551  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:20.896506  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:23.395209  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:25.396619  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:27.397432  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:29.895893  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:31.897669  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:34.395769  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:36.397393  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:38.897321  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:41.395083  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:43.396469  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:45.896657  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:48.395696  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:50.395809  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:52.396703  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:54.896037  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:57.396005  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:24:59.895282  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:01.897980  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:04.395838  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:06.896194  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:09.395854  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:11.397277  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:13.397331  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:15.397490  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:17.397591  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:19.896036  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:22.396954  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:24.896146  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:26.897579  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:29.395235  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:31.395839  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:33.397582  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:35.895175  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:37.896405  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:39.896707  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:41.899292  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:44.395274  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:46.397000  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:48.898605  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:51.396059  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:53.397288  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:55.397856  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:25:57.897543  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:00.397471  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:02.897868  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:05.396629  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:07.897323  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:10.398858  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:12.897144  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:14.898398  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:17.397775  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:19.897173  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:21.899310  312064 pod_ready.go:103] pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace has status "Ready":"False"
	I0122 21:26:24.389110  312064 pod_ready.go:82] duration metric: took 4m0.000477716s for pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace to be "Ready" ...
	E0122 21:26:24.389154  312064 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-c87wg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0122 21:26:24.389176  312064 pod_ready.go:39] duration metric: took 4m11.069181625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
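
The long tail above is the per-pod readiness wait: each system-critical pod is polled until its PodReady condition reports True or the 4m0s budget runs out, which is exactly what happens for metrics-server-f79f97bbb-c87wg. A minimal sketch of that readiness poll with client-go; the pod name, namespace, and timeout come from the log, while the 2s poll interval and kubeconfig path are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20288-247142/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-f79f97bbb-c87wg", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
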
	I0122 21:26:24.389208  312064 kubeadm.go:597] duration metric: took 4m19.768417292s to restartPrimaryControlPlane
	W0122 21:26:24.389296  312064 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:26:24.389334  312064 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:26:52.279687  312064 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.890319832s)
	I0122 21:26:52.279794  312064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:26:52.304815  312064 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:26:52.329295  312064 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:26:52.357250  312064 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:26:52.357282  312064 kubeadm.go:157] found existing configuration files:
	
	I0122 21:26:52.357348  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0122 21:26:52.369564  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:26:52.369651  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:26:52.387841  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0122 21:26:52.405841  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:26:52.405942  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:26:52.429762  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0122 21:26:52.462592  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:26:52.462682  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:26:52.480904  312064 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0122 21:26:52.496073  312064 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:26:52.496159  312064 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:26:52.518415  312064 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:26:52.578093  312064 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:26:52.578238  312064 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:26:52.737204  312064 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:26:52.737354  312064 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:26:52.737481  312064 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:26:52.751606  312064 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:26:52.753683  312064 out.go:235]   - Generating certificates and keys ...
	I0122 21:26:52.753817  312064 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:26:52.753903  312064 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:26:52.754017  312064 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:26:52.754103  312064 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:26:52.754226  312064 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:26:52.754301  312064 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:26:52.754379  312064 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:26:52.754460  312064 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:26:52.754559  312064 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:26:52.754654  312064 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:26:52.754706  312064 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:26:52.754782  312064 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:26:53.322505  312064 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:26:53.712147  312064 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:26:53.934631  312064 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:26:54.082272  312064 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:26:54.222988  312064 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:26:54.223666  312064 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:26:54.226373  312064 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:26:54.228725  312064 out.go:235]   - Booting up control plane ...
	I0122 21:26:54.228922  312064 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:26:54.229066  312064 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:26:54.229175  312064 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:26:54.256243  312064 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:26:54.272979  312064 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:26:54.273251  312064 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:26:54.409063  312064 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:26:54.409241  312064 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:26:55.411702  312064 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002630851s
	I0122 21:26:55.411810  312064 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:27:00.916142  312064 kubeadm.go:310] [api-check] The API server is healthy after 5.502341438s
	I0122 21:27:00.934568  312064 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 21:27:00.968799  312064 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 21:27:01.015087  312064 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 21:27:01.015396  312064 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-991469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 21:27:01.034437  312064 kubeadm.go:310] [bootstrap-token] Using token: n7txdb.zv2x68ko58fs1bg7
	I0122 21:27:01.036112  312064 out.go:235]   - Configuring RBAC rules ...
	I0122 21:27:01.036298  312064 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 21:27:01.044398  312064 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 21:27:01.059270  312064 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 21:27:01.067306  312064 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 21:27:01.072344  312064 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 21:27:01.077999  312064 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 21:27:01.322654  312064 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 21:27:01.827398  312064 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 21:27:02.322314  312064 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 21:27:02.323878  312064 kubeadm.go:310] 
	I0122 21:27:02.323968  312064 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 21:27:02.323976  312064 kubeadm.go:310] 
	I0122 21:27:02.324084  312064 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 21:27:02.324090  312064 kubeadm.go:310] 
	I0122 21:27:02.324124  312064 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 21:27:02.324208  312064 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 21:27:02.324282  312064 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 21:27:02.324288  312064 kubeadm.go:310] 
	I0122 21:27:02.324364  312064 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 21:27:02.324374  312064 kubeadm.go:310] 
	I0122 21:27:02.324429  312064 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 21:27:02.324440  312064 kubeadm.go:310] 
	I0122 21:27:02.324509  312064 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 21:27:02.324647  312064 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 21:27:02.324757  312064 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 21:27:02.324767  312064 kubeadm.go:310] 
	I0122 21:27:02.324881  312064 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 21:27:02.325021  312064 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 21:27:02.325044  312064 kubeadm.go:310] 
	I0122 21:27:02.325193  312064 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n7txdb.zv2x68ko58fs1bg7 \
	I0122 21:27:02.325327  312064 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e447fe88d4e43aa7dedab9e7f78d5319a1771f66f483469eded588e9e0904b1d \
	I0122 21:27:02.325356  312064 kubeadm.go:310] 	--control-plane 
	I0122 21:27:02.325366  312064 kubeadm.go:310] 
	I0122 21:27:02.325500  312064 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 21:27:02.325532  312064 kubeadm.go:310] 
	I0122 21:27:02.325654  312064 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n7txdb.zv2x68ko58fs1bg7 \
	I0122 21:27:02.325786  312064 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e447fe88d4e43aa7dedab9e7f78d5319a1771f66f483469eded588e9e0904b1d 
	I0122 21:27:02.326754  312064 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
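For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). A minimal Go sketch that recomputes such a hash from the CA certificate kubeadm keeps on the control-plane node (illustrative only; this program and path are not part of the test run) could look like:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Read the cluster CA certificate written by kubeadm on the control plane.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The published hash covers the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}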
	I0122 21:27:02.326988  312064 cni.go:84] Creating CNI manager for ""
	I0122 21:27:02.327005  312064 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:02.328742  312064 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:27:02.330106  312064 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:27:02.344861  312064 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
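The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is a standard CNI "bridge" configuration; the log does not show its contents. Purely to illustrate the shape of such a file, a bridge-plus-portmap conflist with an assumed pod subnet (not the exact bytes minikube generates) could be written like this:

	package main

	import (
		"log"
		"os"
	)

	// Illustrative only: the bridge name, subnet, and flag values are assumptions,
	// not the exact configuration minikube deploys.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Writing to /etc/cni/net.d requires root, which is why minikube does it over SSH with sudo.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}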
	I0122 21:27:02.369272  312064 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:27:02.369426  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:02.369557  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-991469 minikube.k8s.io/updated_at=2025_01_22T21_27_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=default-k8s-diff-port-991469 minikube.k8s.io/primary=true
	I0122 21:27:02.769697  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:02.769835  312064 ops.go:34] apiserver oom_adj: -16
	I0122 21:27:03.270258  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:03.769822  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:04.270339  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:04.770787  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:05.270044  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:05.770348  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:06.270672  312064 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:27:06.430205  312064 kubeadm.go:1113] duration metric: took 4.060804005s to wait for elevateKubeSystemPrivileges
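The burst of "kubectl get sa default" runs above, spaced roughly 500ms apart, is minikube waiting for the "default" service account to exist before it finishes elevating kube-system privileges. A minimal client-go sketch of the same polling idea (an approximation, not minikube's actual implementation; the kubeconfig path is taken from the log) might be:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Example path; the run above uses /var/lib/minikube/kubeconfig inside the VM.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// The "default" ServiceAccount is created asynchronously once the control plane is up.
			_, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for default service account")
	}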
	I0122 21:27:06.430256  312064 kubeadm.go:394] duration metric: took 5m1.876098946s to StartCluster
	I0122 21:27:06.430285  312064 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:06.430394  312064 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:06.431809  312064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:06.432326  312064 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.98 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:27:06.432489  312064 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:27:06.432582  312064 config.go:182] Loaded profile config "default-k8s-diff-port-991469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:06.432605  312064 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991469"
	I0122 21:27:06.432635  312064 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-991469"
	I0122 21:27:06.432641  312064 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991469"
	I0122 21:27:06.432653  312064 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991469"
	I0122 21:27:06.432659  312064 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991469"
	I0122 21:27:06.432667  312064 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-991469"
	W0122 21:27:06.432675  312064 addons.go:247] addon metrics-server should already be in state true
	I0122 21:27:06.432716  312064 host.go:66] Checking if "default-k8s-diff-port-991469" exists ...
	I0122 21:27:06.433079  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.433102  312064 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-991469"
	I0122 21:27:06.433116  312064 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-991469"
	W0122 21:27:06.433124  312064 addons.go:247] addon dashboard should already be in state true
	I0122 21:27:06.433144  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.433150  312064 host.go:66] Checking if "default-k8s-diff-port-991469" exists ...
	W0122 21:27:06.432643  312064 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:27:06.433341  312064 host.go:66] Checking if "default-k8s-diff-port-991469" exists ...
	I0122 21:27:06.433083  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.433650  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.433753  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.433797  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.433810  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.433869  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.434749  312064 out.go:177] * Verifying Kubernetes components...
	I0122 21:27:06.436355  312064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:06.457196  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I0122 21:27:06.458377  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.458516  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0122 21:27:06.458999  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.459081  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.459108  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.459589  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.459670  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0122 21:27:06.459911  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.459940  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.460324  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.460471  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.460643  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetState
	I0122 21:27:06.460665  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.460720  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.461414  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.461435  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.461864  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.462540  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.462593  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.464574  312064 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-991469"
	W0122 21:27:06.464601  312064 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:27:06.464638  312064 host.go:66] Checking if "default-k8s-diff-port-991469" exists ...
	I0122 21:27:06.465064  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.465117  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.465973  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I0122 21:27:06.466471  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.467299  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.467320  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.467788  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.468363  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.468413  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.483536  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0122 21:27:06.484257  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.484916  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.484944  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.485592  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.485951  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetState
	I0122 21:27:06.488571  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:27:06.489141  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0122 21:27:06.489588  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.490104  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.490139  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.490206  312064 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:27:06.490607  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.491327  312064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:06.491386  312064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:06.491663  312064 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:06.491687  312064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:27:06.491716  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:27:06.492438  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0122 21:27:06.492897  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.493550  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.493578  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.494015  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.494361  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetState
	I0122 21:27:06.496284  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.496336  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:27:06.497009  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:27:06.496815  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:27:06.497066  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.497221  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:27:06.497374  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:27:06.497532  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:27:06.498307  312064 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:27:06.499655  312064 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:27:06.500964  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:27:06.500986  312064 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:27:06.501020  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:27:06.505005  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.505792  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:27:06.505829  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.506142  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:27:06.507100  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:27:06.507289  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:27:06.507336  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0122 21:27:06.507655  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:27:06.507919  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.508538  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.508558  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.509088  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.509233  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetState
	I0122 21:27:06.511323  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:27:06.513107  312064 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:27:06.514434  312064 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:27:06.514467  312064 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:27:06.514504  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:27:06.518829  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.519360  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:27:06.519382  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.519747  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:27:06.519922  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:27:06.520167  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:27:06.520339  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:27:06.520961  312064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0122 21:27:06.521449  312064 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:06.521925  312064 main.go:141] libmachine: Using API Version  1
	I0122 21:27:06.521946  312064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:06.522288  312064 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:06.522593  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetState
	I0122 21:27:06.524383  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .DriverName
	I0122 21:27:06.524639  312064 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:06.524656  312064 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:27:06.524675  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHHostname
	I0122 21:27:06.527815  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.528283  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fa:b7", ip: ""} in network mk-default-k8s-diff-port-991469: {Iface:virbr3 ExpiryTime:2025-01-22 22:21:48 +0000 UTC Type:0 Mac:52:54:00:39:fa:b7 Iaid: IPaddr:192.168.61.98 Prefix:24 Hostname:default-k8s-diff-port-991469 Clientid:01:52:54:00:39:fa:b7}
	I0122 21:27:06.528366  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | domain default-k8s-diff-port-991469 has defined IP address 192.168.61.98 and MAC address 52:54:00:39:fa:b7 in network mk-default-k8s-diff-port-991469
	I0122 21:27:06.528558  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHPort
	I0122 21:27:06.528816  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHKeyPath
	I0122 21:27:06.529011  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .GetSSHUsername
	I0122 21:27:06.529165  312064 sshutil.go:53] new ssh client: &{IP:192.168.61.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/default-k8s-diff-port-991469/id_rsa Username:docker}
	I0122 21:27:06.706899  312064 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:06.729462  312064 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991469" to be "Ready" ...
	I0122 21:27:06.754697  312064 node_ready.go:49] node "default-k8s-diff-port-991469" has status "Ready":"True"
	I0122 21:27:06.754735  312064 node_ready.go:38] duration metric: took 25.23034ms for node "default-k8s-diff-port-991469" to be "Ready" ...
	I0122 21:27:06.754752  312064 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:27:06.765679  312064 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-28dbf" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:06.874136  312064 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:27:06.874172  312064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:27:06.891469  312064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:06.927490  312064 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:27:06.927592  312064 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:27:06.928226  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:27:06.928248  312064 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:27:06.957431  312064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:06.996192  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:27:06.996234  312064 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:27:06.996469  312064 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:27:06.996491  312064 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:27:07.057899  312064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:27:07.066538  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:27:07.066569  312064 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:27:07.276704  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:27:07.276742  312064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:27:07.392136  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:27:07.392170  312064 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:27:07.473906  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:27:07.473947  312064 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:27:07.569821  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:27:07.569859  312064 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:27:07.725526  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:27:07.725562  312064 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:27:07.823115  312064 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:27:07.823153  312064 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:27:07.871428  312064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:27:08.155073  312064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263545062s)
	I0122 21:27:08.155155  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:08.155152  312064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.197674543s)
	I0122 21:27:08.155175  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:08.155203  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:08.155262  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:08.155518  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:08.155541  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:08.155552  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:08.155563  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:08.155705  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Closing plugin on server side
	I0122 21:27:08.155755  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:08.155812  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:08.155828  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:08.155837  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:08.155861  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Closing plugin on server side
	I0122 21:27:08.155791  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:08.155919  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:08.156116  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:08.156132  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:08.201057  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:08.201095  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:08.201502  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:08.201527  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:08.960994  312064 pod_ready.go:103] pod "coredns-668d6bf9bc-28dbf" in "kube-system" namespace has status "Ready":"False"
	I0122 21:27:09.106356  312064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.04839994s)
	I0122 21:27:09.106438  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:09.106460  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:09.106895  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Closing plugin on server side
	I0122 21:27:09.106963  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:09.106983  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:09.106994  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:09.107002  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:09.107393  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Closing plugin on server side
	I0122 21:27:09.107471  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:09.107483  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:09.107496  312064 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-991469"
	I0122 21:27:10.512965  312064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.641474263s)
	I0122 21:27:10.513052  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:10.513072  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:10.513448  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:10.513576  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Closing plugin on server side
	I0122 21:27:10.513595  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:10.513610  312064 main.go:141] libmachine: Making call to close driver server
	I0122 21:27:10.513619  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) Calling .Close
	I0122 21:27:10.513924  312064 main.go:141] libmachine: (default-k8s-diff-port-991469) DBG | Closing plugin on server side
	I0122 21:27:10.513993  312064 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:27:10.514007  312064 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:27:10.515636  312064 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-991469 addons enable metrics-server
	
	I0122 21:27:10.517316  312064 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:27:10.518707  312064 addons.go:514] duration metric: took 4.086247215s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:27:11.275974  312064 pod_ready.go:103] pod "coredns-668d6bf9bc-28dbf" in "kube-system" namespace has status "Ready":"False"
	I0122 21:27:12.781453  312064 pod_ready.go:93] pod "coredns-668d6bf9bc-28dbf" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:12.781495  312064 pod_ready.go:82] duration metric: took 6.015780777s for pod "coredns-668d6bf9bc-28dbf" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:12.781512  312064 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8xm2c" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:12.798166  312064 pod_ready.go:93] pod "coredns-668d6bf9bc-8xm2c" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:12.798236  312064 pod_ready.go:82] duration metric: took 16.713263ms for pod "coredns-668d6bf9bc-8xm2c" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:12.798253  312064 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.309160  312064 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:13.309211  312064 pod_ready.go:82] duration metric: took 510.946645ms for pod "etcd-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.309230  312064 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.824409  312064 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:13.824439  312064 pod_ready.go:82] duration metric: took 515.200724ms for pod "kube-apiserver-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.824451  312064 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.832346  312064 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:13.832386  312064 pod_ready.go:82] duration metric: took 7.92579ms for pod "kube-controller-manager-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.832404  312064 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48rkl" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.971569  312064 pod_ready.go:93] pod "kube-proxy-48rkl" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:13.971601  312064 pod_ready.go:82] duration metric: took 139.187372ms for pod "kube-proxy-48rkl" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:13.971617  312064 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:14.370877  312064 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991469" in "kube-system" namespace has status "Ready":"True"
	I0122 21:27:14.370907  312064 pod_ready.go:82] duration metric: took 399.280728ms for pod "kube-scheduler-default-k8s-diff-port-991469" in "kube-system" namespace to be "Ready" ...
	I0122 21:27:14.370920  312064 pod_ready.go:39] duration metric: took 7.616152586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:27:14.370946  312064 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:14.371016  312064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:14.438363  312064 api_server.go:72] duration metric: took 8.005985539s to wait for apiserver process to appear ...
	I0122 21:27:14.438397  312064 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:14.438423  312064 api_server.go:253] Checking apiserver healthz at https://192.168.61.98:8444/healthz ...
	I0122 21:27:14.446055  312064 api_server.go:279] https://192.168.61.98:8444/healthz returned 200:
	ok
	I0122 21:27:14.449970  312064 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:14.450019  312064 api_server.go:131] duration metric: took 11.612521ms to wait for apiserver health ...
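The healthz check above is a plain HTTPS GET against the apiserver, here on the non-default port 8444, expecting a 200 response with the body "ok". A stripped-down sketch of such a probe, skipping TLS verification for brevity (a real client should trust the cluster CA instead), could be:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// For illustration only; verify against the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.61.98:8444/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}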
	I0122 21:27:14.450032  312064 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:14.576638  312064 system_pods.go:59] 9 kube-system pods found
	I0122 21:27:14.576690  312064 system_pods.go:61] "coredns-668d6bf9bc-28dbf" [d4a93a6c-6717-4152-a7db-42a8cd6786d6] Running
	I0122 21:27:14.576700  312064 system_pods.go:61] "coredns-668d6bf9bc-8xm2c" [ef56ed64-d524-4967-9f8c-eda485fd9902] Running
	I0122 21:27:14.576706  312064 system_pods.go:61] "etcd-default-k8s-diff-port-991469" [dbb1ef0c-a84e-4bd1-9dcc-db414d392edd] Running
	I0122 21:27:14.576712  312064 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991469" [e17b060d-3e32-44d3-bb4a-52d9f9da963c] Running
	I0122 21:27:14.576718  312064 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991469" [7dc258e8-6294-48c8-9bba-5ff1a966a0f8] Running
	I0122 21:27:14.576723  312064 system_pods.go:61] "kube-proxy-48rkl" [fa94f180-3afc-4823-8347-ade4af0075d5] Running
	I0122 21:27:14.576728  312064 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991469" [81d83701-f91e-4f2a-b04d-90a388fe9dbe] Running
	I0122 21:27:14.576737  312064 system_pods.go:61] "metrics-server-f79f97bbb-vsbtm" [81d12c97-93d0-4cfc-ab1f-b9e7b698b275] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:14.576743  312064 system_pods.go:61] "storage-provisioner" [eecb4eba-25e7-4a79-9e42-842137fa7606] Running
	I0122 21:27:14.576756  312064 system_pods.go:74] duration metric: took 126.714509ms to wait for pod list to return data ...
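The nine-pod listing above amounts to listing the kube-system pods and inspecting each pod's Ready condition; metrics-server stays Pending here, consistent with its image being pulled from the intentionally unreachable fake.domain registry shown earlier in the log. A hedged client-go sketch of that kind of check (the kubeconfig path is an example) follows:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // example path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("  %s ready=%v phase=%s\n", p.Name, isReady(p), p.Status.Phase)
		}
	}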
	I0122 21:27:14.576768  312064 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:27:14.770963  312064 default_sa.go:45] found service account: "default"
	I0122 21:27:14.771000  312064 default_sa.go:55] duration metric: took 194.221431ms for default service account to be created ...
	I0122 21:27:14.771016  312064 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:27:14.976737  312064 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-991469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991469 -n default-k8s-diff-port-991469
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-991469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-991469 logs -n 25: (1.608973227s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-635179                 | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-181389        | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991469       | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | default-k8s-diff-port-991469                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-181389             | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-635179 image list                          | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-489789             | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-489789                  | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-489789 image list                           | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:46 UTC | 22 Jan 25 21:46 UTC |
	| delete  | -p no-preload-806477                                   | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:47 UTC | 22 Jan 25 21:47 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:27:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:27:23.911116  314650 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:27:23.911744  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.911765  314650 out.go:358] Setting ErrFile to fd 2...
	I0122 21:27:23.911774  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.912250  314650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:27:23.913222  314650 out.go:352] Setting JSON to false
	I0122 21:27:23.914762  314650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14990,"bootTime":1737566254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:27:23.914894  314650 start.go:139] virtualization: kvm guest
	I0122 21:27:23.916750  314650 out.go:177] * [newest-cni-489789] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:27:23.918320  314650 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:27:23.918320  314650 notify.go:220] Checking for updates...
	I0122 21:27:23.920824  314650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:27:23.922296  314650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:23.923574  314650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:27:23.924769  314650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:27:23.926102  314650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:27:23.927578  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:23.928058  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.928125  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.944579  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0122 21:27:23.945073  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.945640  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.945664  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.946073  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.946377  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:23.946689  314650 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:27:23.947048  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.947102  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.963420  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0122 21:27:23.963873  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.964454  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.964502  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.964926  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.965154  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.005605  314650 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:27:24.007129  314650 start.go:297] selected driver: kvm2
	I0122 21:27:24.007153  314650 start.go:901] validating driver "kvm2" against &{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.007318  314650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:27:24.008093  314650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.008222  314650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:27:24.024940  314650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:27:24.025456  314650 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:24.025502  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:24.025549  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:24.025588  314650 start.go:340] cluster config:
	{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.025695  314650 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.027752  314650 out.go:177] * Starting "newest-cni-489789" primary control-plane node in "newest-cni-489789" cluster
	I0122 21:27:24.029033  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:24.029101  314650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:27:24.029119  314650 cache.go:56] Caching tarball of preloaded images
	I0122 21:27:24.029287  314650 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:27:24.029306  314650 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:27:24.029475  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:24.029808  314650 start.go:360] acquireMachinesLock for newest-cni-489789: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:27:24.029874  314650 start.go:364] duration metric: took 34.85µs to acquireMachinesLock for "newest-cni-489789"
	I0122 21:27:24.029897  314650 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:27:24.029908  314650 fix.go:54] fixHost starting: 
	I0122 21:27:24.030383  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:24.030486  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:24.046512  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0122 21:27:24.047013  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:24.047605  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:24.047640  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:24.048047  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:24.048290  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.048464  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:24.050271  314650 fix.go:112] recreateIfNeeded on newest-cni-489789: state=Stopped err=<nil>
	I0122 21:27:24.050304  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	W0122 21:27:24.050473  314650 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:27:24.052496  314650 out.go:177] * Restarting existing kvm2 VM for "newest-cni-489789" ...
	I0122 21:27:21.730303  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:21.747123  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:21.747212  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:21.793769  312675 cri.go:89] found id: ""
	I0122 21:27:21.793807  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.793827  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:21.793835  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:21.793912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:21.840045  312675 cri.go:89] found id: ""
	I0122 21:27:21.840088  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.840101  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:21.840109  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:21.840187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:21.885265  312675 cri.go:89] found id: ""
	I0122 21:27:21.885302  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.885314  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:21.885323  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:21.885404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:21.937734  312675 cri.go:89] found id: ""
	I0122 21:27:21.937768  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.937777  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:21.937783  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:21.937844  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:21.989238  312675 cri.go:89] found id: ""
	I0122 21:27:21.989276  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.989294  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:21.989300  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:21.989377  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:22.035837  312675 cri.go:89] found id: ""
	I0122 21:27:22.035921  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.035934  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:22.035944  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:22.036016  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:22.091690  312675 cri.go:89] found id: ""
	I0122 21:27:22.091731  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.091745  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:22.091754  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:22.091828  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:22.149775  312675 cri.go:89] found id: ""
	I0122 21:27:22.149888  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.149913  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:22.149958  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:22.150005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:22.213610  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:22.213665  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:22.233970  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:22.234014  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:22.318579  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:22.318606  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:22.318622  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:22.422850  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:22.422899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:24.974063  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:24.990751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:24.990850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:25.036044  312675 cri.go:89] found id: ""
	I0122 21:27:25.036082  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.036094  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:25.036103  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:25.036173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:25.078700  312675 cri.go:89] found id: ""
	I0122 21:27:25.078736  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.078748  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:25.078759  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:25.078829  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:25.134919  312675 cri.go:89] found id: ""
	I0122 21:27:25.134971  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.134984  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:25.134994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:25.135075  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:25.183649  312675 cri.go:89] found id: ""
	I0122 21:27:25.183684  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.183695  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:25.183704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:25.183778  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:25.240357  312675 cri.go:89] found id: ""
	I0122 21:27:25.240401  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.240414  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:25.240425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:25.240555  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:25.284093  312675 cri.go:89] found id: ""
	I0122 21:27:25.284132  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.284141  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:25.284149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:25.284218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:25.328590  312675 cri.go:89] found id: ""
	I0122 21:27:25.328621  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.328632  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:25.328641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:25.328710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:25.378479  312675 cri.go:89] found id: ""
	I0122 21:27:25.378517  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.378529  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:25.378543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:25.378559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:25.433767  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:25.433800  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:24.053834  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Start
	I0122 21:27:24.054152  314650 main.go:141] libmachine: (newest-cni-489789) starting domain...
	I0122 21:27:24.054175  314650 main.go:141] libmachine: (newest-cni-489789) ensuring networks are active...
	I0122 21:27:24.055132  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network default is active
	I0122 21:27:24.055534  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network mk-newest-cni-489789 is active
	I0122 21:27:24.055963  314650 main.go:141] libmachine: (newest-cni-489789) getting domain XML...
	I0122 21:27:24.056886  314650 main.go:141] libmachine: (newest-cni-489789) creating domain...
	I0122 21:27:25.457503  314650 main.go:141] libmachine: (newest-cni-489789) waiting for IP...
	I0122 21:27:25.458754  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.459431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.459544  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.459394  314684 retry.go:31] will retry after 258.579884ms: waiting for domain to come up
	I0122 21:27:25.720098  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.720657  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.720704  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.720649  314684 retry.go:31] will retry after 347.192205ms: waiting for domain to come up
	I0122 21:27:26.069095  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.069843  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.069880  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.069813  314684 retry.go:31] will retry after 318.422908ms: waiting for domain to come up
	I0122 21:27:26.390692  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.391374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.391431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.391350  314684 retry.go:31] will retry after 516.847382ms: waiting for domain to come up
	I0122 21:27:26.910252  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.910831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.910862  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.910801  314684 retry.go:31] will retry after 657.195872ms: waiting for domain to come up
	I0122 21:27:27.569972  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:27.570617  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:27.570651  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:27.570590  314684 retry.go:31] will retry after 601.660948ms: waiting for domain to come up
	I0122 21:27:28.173427  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:28.174022  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:28.174065  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:28.173988  314684 retry.go:31] will retry after 839.292486ms: waiting for domain to come up
	I0122 21:27:25.497717  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:25.497767  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:25.530904  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:25.530961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:25.631676  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:25.631701  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:25.631717  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:28.221852  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:28.236702  312675 kubeadm.go:597] duration metric: took 4m3.036103838s to restartPrimaryControlPlane
	W0122 21:27:28.236803  312675 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:27:28.236837  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:27:29.014929  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:29.015535  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:29.015569  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:29.015501  314684 retry.go:31] will retry after 1.28366543s: waiting for domain to come up
	I0122 21:27:30.300346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:30.300806  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:30.300834  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:30.300775  314684 retry.go:31] will retry after 1.437378164s: waiting for domain to come up
	I0122 21:27:31.739437  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:31.740073  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:31.740106  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:31.740043  314684 retry.go:31] will retry after 1.547235719s: waiting for domain to come up
	I0122 21:27:33.289857  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:33.290395  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:33.290452  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:33.290357  314684 retry.go:31] will retry after 2.864838858s: waiting for domain to come up
	I0122 21:27:30.647940  312675 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.411072952s)
	I0122 21:27:30.648042  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:27:30.669610  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:30.684678  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:30.698168  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:30.698232  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:30.698285  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:30.708774  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:30.708855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:30.720213  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:30.731121  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:30.731207  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:30.743153  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.754160  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:30.754262  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.765730  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:30.776902  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:30.776990  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:30.788361  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:27:31.040925  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:27:36.157916  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:36.158675  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:36.158706  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:36.158608  314684 retry.go:31] will retry after 3.253566336s: waiting for domain to come up
	I0122 21:27:39.413761  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:39.414347  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:39.414380  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:39.414310  314684 retry.go:31] will retry after 3.952766125s: waiting for domain to come up
	I0122 21:27:43.371406  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371943  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has current primary IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371999  314650 main.go:141] libmachine: (newest-cni-489789) found domain IP: 192.168.50.146
	I0122 21:27:43.372024  314650 main.go:141] libmachine: (newest-cni-489789) reserving static IP address...
	I0122 21:27:43.372454  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.372482  314650 main.go:141] libmachine: (newest-cni-489789) DBG | skip adding static IP to network mk-newest-cni-489789 - found existing host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"}
	I0122 21:27:43.372502  314650 main.go:141] libmachine: (newest-cni-489789) reserved static IP address 192.168.50.146 for domain newest-cni-489789
	I0122 21:27:43.372516  314650 main.go:141] libmachine: (newest-cni-489789) waiting for SSH...
	I0122 21:27:43.372527  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Getting to WaitForSSH function...
	I0122 21:27:43.374698  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.374984  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.375016  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.375148  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH client type: external
	I0122 21:27:43.375173  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa (-rw-------)
	I0122 21:27:43.375212  314650 main.go:141] libmachine: (newest-cni-489789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:27:43.375232  314650 main.go:141] libmachine: (newest-cni-489789) DBG | About to run SSH command:
	I0122 21:27:43.375243  314650 main.go:141] libmachine: (newest-cni-489789) DBG | exit 0
	I0122 21:27:43.503039  314650 main.go:141] libmachine: (newest-cni-489789) DBG | SSH cmd err, output: <nil>: 
	I0122 21:27:43.503449  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetConfigRaw
	I0122 21:27:43.504138  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.507198  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507562  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.507607  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507876  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:43.508166  314650 machine.go:93] provisionDockerMachine start ...
	I0122 21:27:43.508196  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:43.508518  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.511111  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511408  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.511442  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511632  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.511842  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512002  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512147  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.512352  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.512624  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.512643  314650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:27:43.619425  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:27:43.619472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619742  314650 buildroot.go:166] provisioning hostname "newest-cni-489789"
	I0122 21:27:43.619772  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619998  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.622781  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623242  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.623285  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623505  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.623728  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.623892  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.624013  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.624154  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.624410  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.624432  314650 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-489789 && echo "newest-cni-489789" | sudo tee /etc/hostname
	I0122 21:27:43.747575  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-489789
	
	I0122 21:27:43.747605  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.750745  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751080  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.751127  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751553  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.751775  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.751918  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.752035  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.752185  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.752425  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.752465  314650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-489789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-489789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-489789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:27:43.865258  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:27:43.865290  314650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:27:43.865312  314650 buildroot.go:174] setting up certificates
	I0122 21:27:43.865327  314650 provision.go:84] configureAuth start
	I0122 21:27:43.865362  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.865704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.868648  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.868993  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.869025  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.869222  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.871572  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.871860  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.871894  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.872044  314650 provision.go:143] copyHostCerts
	I0122 21:27:43.872109  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:27:43.872130  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:27:43.872205  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:27:43.872312  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:27:43.872321  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:27:43.872346  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:27:43.872433  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:27:43.872447  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:27:43.872471  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:27:43.872536  314650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-489789 san=[127.0.0.1 192.168.50.146 localhost minikube newest-cni-489789]
	I0122 21:27:44.234481  314650 provision.go:177] copyRemoteCerts
	I0122 21:27:44.234579  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:27:44.234618  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.237848  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238297  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.238332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238604  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.238788  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.238988  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.239154  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.326083  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:27:44.355837  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:27:44.387644  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:27:44.418003  314650 provision.go:87] duration metric: took 552.65522ms to configureAuth
	I0122 21:27:44.418039  314650 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:27:44.418347  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:44.418475  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.421349  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.421796  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.421839  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.422067  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.422301  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422470  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422603  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.422810  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.423129  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.423156  314650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:27:44.671197  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:27:44.671232  314650 machine.go:96] duration metric: took 1.163046458s to provisionDockerMachine
	I0122 21:27:44.671247  314650 start.go:293] postStartSetup for "newest-cni-489789" (driver="kvm2")
	I0122 21:27:44.671261  314650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:27:44.671289  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.671667  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:27:44.671704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.674811  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675137  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.675164  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675350  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.675624  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.675817  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.675987  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.759194  314650 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:27:44.764553  314650 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:27:44.764591  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:27:44.764668  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:27:44.764741  314650 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:27:44.764835  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:27:44.778239  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:44.807409  314650 start.go:296] duration metric: took 136.131239ms for postStartSetup
	I0122 21:27:44.807474  314650 fix.go:56] duration metric: took 20.777566838s for fixHost
	I0122 21:27:44.807580  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.810883  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811279  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.811312  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.811736  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.811908  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.812086  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.812268  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.812448  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.812459  314650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:27:44.915903  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737581264.870208902
	
	I0122 21:27:44.915934  314650 fix.go:216] guest clock: 1737581264.870208902
	I0122 21:27:44.915945  314650 fix.go:229] Guest: 2025-01-22 21:27:44.870208902 +0000 UTC Remote: 2025-01-22 21:27:44.807479632 +0000 UTC m=+20.941890306 (delta=62.72927ms)
	I0122 21:27:44.915983  314650 fix.go:200] guest clock delta is within tolerance: 62.72927ms
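The fix.go lines above read the guest clock via `date +%s.%N`, compare it with the host timestamp, and accept the 62.72927ms drift because it falls within tolerance. A small sketch of that comparison using the timestamps from the log; the one-second tolerance passed in `main` is an assumption for illustration, not necessarily the value minikube uses.

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is small enough
// to skip resetting the VM clock. The tolerance is supplied by the caller;
// the log only shows that a 62.72927ms delta was accepted.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps as logged: Remote (host) vs Guest.
	host := time.Date(2025, 1, 22, 21, 27, 44, 807479632, time.UTC)
	guest := time.Date(2025, 1, 22, 21, 27, 44, 870208902, time.UTC)
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // delta=62.72927ms withinTolerance=true
}
```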
	I0122 21:27:44.915991  314650 start.go:83] releasing machines lock for "newest-cni-489789", held for 20.886101347s
	I0122 21:27:44.916019  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.916292  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:44.919374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.919795  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.919831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.920026  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920725  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920966  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.921087  314650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:27:44.921144  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.921271  314650 ssh_runner.go:195] Run: cat /version.json
	I0122 21:27:44.921303  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.924275  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924511  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924546  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924566  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924759  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.924827  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924871  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924995  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925090  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.925199  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925283  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925319  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.925420  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925532  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:45.025072  314650 ssh_runner.go:195] Run: systemctl --version
	I0122 21:27:45.032652  314650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:27:45.187726  314650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:27:45.194767  314650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:27:45.194851  314650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:27:45.213610  314650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:27:45.213644  314650 start.go:495] detecting cgroup driver to use...
	I0122 21:27:45.213723  314650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:27:45.231803  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:27:45.247682  314650 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:27:45.247801  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:27:45.263581  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:27:45.279536  314650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:27:45.406663  314650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:27:45.562297  314650 docker.go:233] disabling docker service ...
	I0122 21:27:45.562383  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:27:45.579904  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:27:45.595144  314650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:27:45.739957  314650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:27:45.866024  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:27:45.882728  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:27:45.907297  314650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:27:45.907388  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.920271  314650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:27:45.920341  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.933095  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.945711  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.958348  314650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:27:45.972409  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.989090  314650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.011819  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.025229  314650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:27:46.038393  314650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:27:46.038475  314650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:27:46.055252  314650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
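Above, the `net.bridge.bridge-nf-call-iptables` probe fails because `br_netfilter` is not loaded yet, so the module is loaded and IPv4 forwarding is enabled. A sketch of that check-then-fallback flow; the local `run` helper below stands in for minikube's SSH runner and is only illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command locally and echoes its output; in minikube the
// equivalent commands run over SSH inside the guest VM.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// If the bridge-netfilter sysctl cannot be read, the module is not loaded
	// yet: load it, then enable IPv4 forwarding, mirroring the logged sequence.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
```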
	I0122 21:27:46.068173  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:46.196285  314650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:27:46.295821  314650 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:27:46.295921  314650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:27:46.301506  314650 start.go:563] Will wait 60s for crictl version
	I0122 21:27:46.301587  314650 ssh_runner.go:195] Run: which crictl
	I0122 21:27:46.306074  314650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:27:46.352624  314650 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:27:46.352727  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.385398  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.422040  314650 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:27:46.423591  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:46.426902  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427305  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:46.427332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427679  314650 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0122 21:27:46.432609  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:46.448941  314650 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0122 21:27:46.450413  314650 kubeadm.go:883] updating cluster {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:27:46.450575  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:46.450683  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:46.496073  314650 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:27:46.496165  314650 ssh_runner.go:195] Run: which lz4
	I0122 21:27:46.500895  314650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:27:46.505854  314650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:27:46.505909  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:27:48.159588  314650 crio.go:462] duration metric: took 1.658732075s to copy over tarball
	I0122 21:27:48.159687  314650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:27:50.643587  314650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483861806s)
	I0122 21:27:50.643623  314650 crio.go:469] duration metric: took 2.483996867s to extract the tarball
	I0122 21:27:50.643632  314650 ssh_runner.go:146] rm: /preloaded.tar.lz4
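The preload path above works as follows: if `/preloaded.tar.lz4` is absent in the guest, the cached tarball is copied in, extracted under `/var` with extended attributes preserved, and then removed. A rough local sketch of that flow; the scp step is only noted, since in minikube it happens over SSH.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload mirrors the logged flow: stat the tarball, copy it in only
// when missing, extract it under /var, then delete it.
func ensurePreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("%s missing, the cached preload would be copied in here\n", tarball)
		return err
	}
	// --xattrs keeps security.capability bits on the extracted image layers.
	if out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	fmt.Println(ensurePreload("/preloaded.tar.lz4"))
}
```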
	I0122 21:27:50.683708  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:50.732147  314650 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:27:50.732183  314650 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:27:50.732194  314650 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.32.1 crio true true} ...
	I0122 21:27:50.732350  314650 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-489789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:27:50.732425  314650 ssh_runner.go:195] Run: crio config
	I0122 21:27:50.789877  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:50.789904  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:50.789920  314650 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0122 21:27:50.789953  314650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-489789 NodeName:newest-cni-489789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:27:50.790132  314650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-489789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.146"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
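minikube renders the kubeadm config dumped above from the options logged at kubeadm.go:189. A small text/template fragment illustrating how such a stanza could be produced from those values; the template and the `opts` struct here are illustrative only, not minikube's actual templates.

```go
package main

import (
	"os"
	"text/template"
)

// networkingTmpl is an illustrative fragment; minikube's real templates cover
// the full InitConfiguration/ClusterConfiguration dump shown above.
const networkingTmpl = `kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

// opts carries the values logged for this profile.
type opts struct {
	KubernetesVersion string
	DNSDomain         string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	t := template.Must(template.New("networking").Parse(networkingTmpl))
	_ = t.Execute(os.Stdout, opts{
		KubernetesVersion: "v1.32.1",
		DNSDomain:         "cluster.local",
		PodSubnet:         "10.42.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
```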
	
	I0122 21:27:50.790261  314650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:27:50.801652  314650 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:27:50.801742  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:27:50.813168  314650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0122 21:27:50.832707  314650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:27:50.852375  314650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0122 21:27:50.875185  314650 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I0122 21:27:50.879818  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:50.893992  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:51.040056  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:51.060681  314650 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789 for IP: 192.168.50.146
	I0122 21:27:51.060711  314650 certs.go:194] generating shared ca certs ...
	I0122 21:27:51.060737  314650 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.060940  314650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:27:51.061018  314650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:27:51.061036  314650 certs.go:256] generating profile certs ...
	I0122 21:27:51.061157  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/client.key
	I0122 21:27:51.061251  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key.de28c3d3
	I0122 21:27:51.061317  314650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key
	I0122 21:27:51.061482  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:27:51.061526  314650 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:27:51.061539  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:27:51.061572  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:27:51.061603  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:27:51.061636  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:27:51.061692  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:51.062633  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:27:51.098858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:27:51.145243  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:27:51.180019  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:27:51.208916  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:27:51.237139  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:27:51.270858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:27:51.306734  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:27:51.341424  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:27:51.370701  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:27:51.402552  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:27:51.431817  314650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:27:51.452816  314650 ssh_runner.go:195] Run: openssl version
	I0122 21:27:51.460223  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:27:51.474716  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480785  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480874  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.489093  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:27:51.501870  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:27:51.514659  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520559  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520713  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.527928  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:27:51.541856  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:27:51.555463  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561295  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561368  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.568531  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
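The repeated pattern above installs each CA into the guest's trust store: link the PEM into /etc/ssl/certs, ask openssl for its subject hash, and create the `<hash>.0` symlink that TLS libraries look up. A sketch of that sequence; the commands are printed rather than executed over SSH, and the single example path is taken from the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert mirrors the logged sequence: link the PEM into /etc/ssl/certs,
// compute its OpenSSL subject hash, then add the <hash>.0 symlink used for
// certificate lookup. Commands are printed instead of being run remotely.
func installCert(pem string) error {
	base := filepath.Base(pem)
	fmt.Printf("sudo test -s %s && ln -fs %s /etc/ssl/certs/%s\n", pem, pem, base)

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("sudo test -L /etc/ssl/certs/%s.0 || ln -fs /etc/ssl/certs/%s /etc/ssl/certs/%s.0\n",
		hash, base, hash)
	return nil
}

func main() {
	// In the log, minikubeCA.pem hashes to b5213941 and 2547542.pem to 3ec20f2e.
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```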
	I0122 21:27:51.584716  314650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:27:51.590762  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:27:51.598592  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:27:51.605666  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:27:51.613414  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:27:51.621894  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:27:51.629916  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
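The `-checkend 86400` invocations above confirm that each control-plane certificate remains valid for at least another 24 hours. The same check expressed directly with Go's crypto/x509; the path is taken from the log, and this is a stand-alone illustration rather than minikube's code.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM-encoded certificate at path is still valid
// for at least the given duration, mirroring `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```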
	I0122 21:27:51.636995  314650 kubeadm.go:392] StartCluster: {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mult
iNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:51.637138  314650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:27:51.637358  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.691610  314650 cri.go:89] found id: ""
	I0122 21:27:51.691683  314650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:27:51.703943  314650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:27:51.703976  314650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:27:51.704044  314650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:27:51.715920  314650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:27:51.716767  314650 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-489789" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:51.717203  314650 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-489789" cluster setting kubeconfig missing "newest-cni-489789" context setting]
	I0122 21:27:51.717901  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.729230  314650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:27:51.741794  314650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I0122 21:27:51.741842  314650 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:27:51.741859  314650 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:27:51.741927  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.789068  314650 cri.go:89] found id: ""
	I0122 21:27:51.789171  314650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:27:51.809451  314650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:51.821492  314650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:51.821515  314650 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:51.821564  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:51.833428  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:51.833507  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:51.845423  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:51.856151  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:51.856247  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:51.868260  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.879595  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:51.879671  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.892482  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:51.905485  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:51.905558  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
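Each grep/rm pair above tests whether an existing kubeconfig already points at https://control-plane.minikube.internal:8443 and removes it otherwise (here the files simply do not exist yet), so the following `kubeadm init phase kubeconfig` run can regenerate them. A compact sketch of that decision; the file list is taken from the log and the helper is illustrative.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// needsRemoval reports whether the kubeconfig at path should be deleted before
// `kubeadm init phase kubeconfig` runs: true when it does not mention the
// expected control-plane endpoint (a missing file is treated the same way).
func needsRemoval(path, endpoint string) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return true
	}
	return !strings.Contains(string(data), endpoint)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if needsRemoval(f, endpoint) {
			fmt.Println("would remove", f)
		}
	}
}
```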
	I0122 21:27:51.917498  314650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:51.930487  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:52.072199  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.069420  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.321398  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.393577  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.471920  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:53.472027  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:53.972577  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.472481  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.972531  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.989674  314650 api_server.go:72] duration metric: took 1.517756303s to wait for apiserver process to appear ...
	I0122 21:27:54.989707  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:54.989729  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.208473  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.208515  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.208536  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.292726  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.292780  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.490170  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.499620  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.499655  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:57.990312  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.998214  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.998257  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.489875  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.496876  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:58.496913  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.990600  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.995909  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.004894  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.004943  314650 api_server.go:131] duration metric: took 4.015227175s to wait for apiserver health ...
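The api_server.go lines above poll /healthz until it returns 200: the first answers are 403 for the anonymous user, then 500 while the rbac and priority-class post-start hooks finish. A minimal polling sketch; the InsecureSkipVerify transport and the 500ms interval are assumptions made for this example only.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers HTTP 200 or the timeout elapses.
// Intermediate 403/500 answers, like the ones in the log above, are retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.146:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```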
	I0122 21:27:59.004977  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:59.004987  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:59.006689  314650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:27:59.008029  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:27:59.020070  314650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:27:59.044659  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.055648  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.055702  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.055713  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.055724  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.055732  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.055738  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.055754  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.055766  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.055774  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.055788  314650 system_pods.go:74] duration metric: took 11.091605ms to wait for pod list to return data ...
	I0122 21:27:59.055802  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.060105  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.060148  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.060164  314650 node_conditions.go:105] duration metric: took 4.355866ms to run NodePressure ...
	I0122 21:27:59.060188  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:59.384018  314650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:27:59.398090  314650 ops.go:34] apiserver oom_adj: -16
	I0122 21:27:59.398128  314650 kubeadm.go:597] duration metric: took 7.694142189s to restartPrimaryControlPlane
	I0122 21:27:59.398142  314650 kubeadm.go:394] duration metric: took 7.761160632s to StartCluster
	I0122 21:27:59.398170  314650 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.398290  314650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:59.400046  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.400419  314650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:27:59.400556  314650 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:27:59.400665  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:59.400686  314650 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-489789"
	I0122 21:27:59.400707  314650 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-489789"
	W0122 21:27:59.400716  314650 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:27:59.400726  314650 addons.go:69] Setting default-storageclass=true in profile "newest-cni-489789"
	I0122 21:27:59.400741  314650 addons.go:69] Setting dashboard=true in profile "newest-cni-489789"
	I0122 21:27:59.400761  314650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-489789"
	I0122 21:27:59.400768  314650 addons.go:238] Setting addon dashboard=true in "newest-cni-489789"
	W0122 21:27:59.400778  314650 addons.go:247] addon dashboard should already be in state true
	I0122 21:27:59.400815  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.400765  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401235  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401237  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401262  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401321  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.400718  314650 addons.go:69] Setting metrics-server=true in profile "newest-cni-489789"
	I0122 21:27:59.401464  314650 addons.go:238] Setting addon metrics-server=true in "newest-cni-489789"
	W0122 21:27:59.401475  314650 addons.go:247] addon metrics-server should already be in state true
	I0122 21:27:59.401509  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401887  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401975  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.402025  314650 out.go:177] * Verifying Kubernetes components...
	I0122 21:27:59.403359  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0122 21:27:59.421021  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0122 21:27:59.421349  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421459  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421547  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.422098  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422122  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422144  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422325  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422349  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422401  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0122 21:27:59.423146  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423151  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423148  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423359  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.423430  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.423817  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.423841  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.423816  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.423882  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.424405  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.425054  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425105  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.425288  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425335  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.427261  314650 addons.go:238] Setting addon default-storageclass=true in "newest-cni-489789"
	W0122 21:27:59.427282  314650 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:27:59.427316  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.427674  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.427723  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.446713  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0122 21:27:59.446783  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0122 21:27:59.451272  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451373  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451946  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.451969  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452101  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.452121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452538  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.452791  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.452801  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.453414  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.455400  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.455881  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.457716  314650 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:27:59.457751  314650 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:27:59.459475  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:27:59.459504  314650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:27:59.459539  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.460864  314650 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:27:59.462275  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:27:59.462311  314650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:27:59.462354  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.466673  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467509  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.467541  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467851  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.468096  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.468288  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.468589  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.468600  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.469258  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.469308  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.469497  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.469679  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.469875  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.470056  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.473781  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0122 21:27:59.473966  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0122 21:27:59.474357  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474615  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474910  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.474936  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475242  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.475262  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475362  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.475908  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.475957  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.476056  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.476285  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.478535  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.480540  314650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:27:59.481982  314650 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.482013  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:27:59.482045  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.485683  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486142  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.486177  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486465  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.486710  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.486889  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.487038  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.494246  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0122 21:27:59.494801  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.495426  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.495453  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.495905  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.496130  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.498296  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.498565  314650 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.498586  314650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:27:59.498611  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.501861  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502313  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.502346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502646  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.502865  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.503077  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.503233  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.724824  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:59.770671  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:59.770782  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:59.794707  314650 api_server.go:72] duration metric: took 394.235725ms to wait for apiserver process to appear ...
	I0122 21:27:59.794739  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:59.794764  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:59.830916  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.833823  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.833866  314650 api_server.go:131] duration metric: took 39.117571ms to wait for apiserver health ...
	I0122 21:27:59.833879  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.842548  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.866014  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.866078  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.866091  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.866103  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.866113  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.866119  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.866128  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.866137  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.866143  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.866152  314650 system_pods.go:74] duration metric: took 32.265403ms to wait for pod list to return data ...
	I0122 21:27:59.866168  314650 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:27:59.871064  314650 default_sa.go:45] found service account: "default"
	I0122 21:27:59.871106  314650 default_sa.go:55] duration metric: took 4.928382ms for default service account to be created ...
	I0122 21:27:59.871125  314650 kubeadm.go:582] duration metric: took 470.664674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:59.871157  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.875089  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.875125  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.875139  314650 node_conditions.go:105] duration metric: took 3.96814ms to run NodePressure ...
	I0122 21:27:59.875155  314650 start.go:241] waiting for startup goroutines ...
	I0122 21:27:59.879100  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.991147  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:27:59.991183  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:28:00.010416  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:28:00.010448  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:28:00.034463  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:28:00.034502  314650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:28:00.066923  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.066963  314650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:28:00.112671  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.155556  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:28:00.155594  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:28:00.224676  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:28:00.224717  314650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:28:00.402769  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:28:00.402799  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:28:00.611017  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:28:00.611060  314650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:28:00.746957  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:28:00.747012  314650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:28:00.817833  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:28:00.817864  314650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:28:00.905629  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:28:00.905658  314650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:28:00.973450  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:00.973488  314650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:28:01.033649  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:01.902642  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.023480792s)
	I0122 21:28:01.902735  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902750  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.902850  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.060261694s)
	I0122 21:28:01.902903  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902915  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.904921  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.904989  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.904996  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905018  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905027  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905036  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905033  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905093  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905102  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905104  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905492  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905513  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905534  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905540  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905567  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905581  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.914609  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.914638  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.914975  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.915021  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.915036  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003384  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.890658634s)
	I0122 21:28:02.003466  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003495  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.003851  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.003914  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.003943  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003952  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003960  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.004229  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.004247  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.004261  314650 addons.go:479] Verifying addon metrics-server=true in "newest-cni-489789"
	I0122 21:28:02.891241  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.857486932s)
	I0122 21:28:02.891533  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.891588  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894087  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.894100  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894130  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.894140  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.894149  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894509  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894564  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.896533  314650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-489789 addons enable metrics-server
	
	I0122 21:28:02.898219  314650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:28:02.900518  314650 addons.go:514] duration metric: took 3.499959979s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:28:02.900586  314650 start.go:246] waiting for cluster config update ...
	I0122 21:28:02.900604  314650 start.go:255] writing updated cluster config ...
	I0122 21:28:02.900904  314650 ssh_runner.go:195] Run: rm -f paused
	I0122 21:28:02.965147  314650 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:28:02.967085  314650 out.go:177] * Done! kubectl is now configured to use "newest-cni-489789" cluster and "default" namespace by default
	I0122 21:29:27.087272  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:29:27.087393  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:29:27.089567  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.089666  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.089781  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.089958  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.090084  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:27.090165  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:27.092167  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:27.092283  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:27.092358  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:27.092462  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:27.092535  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:27.092611  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:27.092682  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:27.092771  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:27.092848  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:27.092976  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:27.093094  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:27.093166  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:27.093261  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:27.093350  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:27.093398  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:27.093476  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:27.093559  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:27.093650  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:27.093720  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:27.093761  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:27.093818  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:27.095338  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:27.095468  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:27.095555  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:27.095632  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:27.095710  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:27.095838  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:29:27.095878  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:29:27.095937  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096106  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096195  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096453  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096565  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096796  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096867  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097090  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097177  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097367  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097386  312675 kubeadm.go:310] 
	I0122 21:29:27.097443  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:29:27.097512  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:29:27.097527  312675 kubeadm.go:310] 
	I0122 21:29:27.097557  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:29:27.097611  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:29:27.097761  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:29:27.097783  312675 kubeadm.go:310] 
	I0122 21:29:27.097878  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:29:27.097928  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:29:27.097955  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:29:27.097962  312675 kubeadm.go:310] 
	I0122 21:29:27.098055  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:29:27.098120  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:29:27.098127  312675 kubeadm.go:310] 
	I0122 21:29:27.098272  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:29:27.098357  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:29:27.098434  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:29:27.098533  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:29:27.098585  312675 kubeadm.go:310] 
	W0122 21:29:27.098687  312675 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0122 21:29:27.098731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:29:27.599261  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:29:27.617267  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:29:27.629164  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:29:27.629190  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:29:27.629255  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:29:27.641001  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:29:27.641072  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:29:27.653446  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:29:27.666334  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:29:27.666426  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:29:27.678551  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.689687  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:29:27.689757  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.702030  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:29:27.713507  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:29:27.713585  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:29:27.726067  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:29:27.816417  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.816555  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.995432  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.995599  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.995745  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:28.218104  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:28.220056  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:28.220190  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:28.220278  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:28.220383  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:28.220486  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:28.220573  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:28.220648  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:28.220880  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:28.221175  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:28.222058  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:28.222351  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:28.222442  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:28.222530  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:28.304455  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:28.572192  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:28.869356  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:29.053609  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:29.082264  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:29.082429  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:29.082503  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:29.253931  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:29.256894  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:29.257044  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:29.267513  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:29.269154  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:29.270276  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:29.274228  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:30:09.277116  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:30:09.277238  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:09.277504  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:14.278173  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:14.278454  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:24.278945  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:24.279149  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:44.279492  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:44.279715  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278351  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:31:24.278612  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278628  312675 kubeadm.go:310] 
	I0122 21:31:24.278664  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:31:24.278723  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:31:24.278735  312675 kubeadm.go:310] 
	I0122 21:31:24.278775  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:31:24.278827  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:31:24.278956  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:31:24.278981  312675 kubeadm.go:310] 
	I0122 21:31:24.279066  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:31:24.279109  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:31:24.279140  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:31:24.279147  312675 kubeadm.go:310] 
	I0122 21:31:24.279253  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:31:24.279353  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:31:24.279373  312675 kubeadm.go:310] 
	I0122 21:31:24.279516  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:31:24.279639  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:31:24.279754  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:31:24.279837  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:31:24.279895  312675 kubeadm.go:310] 
	I0122 21:31:24.280842  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:31:24.280984  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:31:24.281074  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:31:24.281148  312675 kubeadm.go:394] duration metric: took 7m59.138107768s to StartCluster
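The kubeadm output above identifies the kubelet as the component that never became healthy on port 10248. A minimal sketch of the manual checks it recommends, run on the node and using only the commands quoted in the log:

	# check whether the kubelet service is running and healthy
	systemctl status kubelet
	# inspect the kubelet journal for the reason it is failing
	journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container (CONTAINERID is a placeholder, as in the log)
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID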
	I0122 21:31:24.281220  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:31:24.281302  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:31:24.331184  312675 cri.go:89] found id: ""
	I0122 21:31:24.331225  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.331235  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:31:24.331242  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:31:24.331309  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:31:24.372934  312675 cri.go:89] found id: ""
	I0122 21:31:24.372963  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.372972  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:31:24.372979  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:31:24.373034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:31:24.413239  312675 cri.go:89] found id: ""
	I0122 21:31:24.413274  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.413284  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:31:24.413290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:31:24.413347  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:31:24.452513  312675 cri.go:89] found id: ""
	I0122 21:31:24.452552  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.452564  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:31:24.452573  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:31:24.452644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:31:24.491580  312675 cri.go:89] found id: ""
	I0122 21:31:24.491617  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.491629  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:31:24.491637  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:31:24.491710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:31:24.544823  312675 cri.go:89] found id: ""
	I0122 21:31:24.544856  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.544865  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:31:24.544872  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:31:24.544930  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:31:24.585047  312675 cri.go:89] found id: ""
	I0122 21:31:24.585085  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.585099  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:31:24.585108  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:31:24.585175  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:31:24.624152  312675 cri.go:89] found id: ""
	I0122 21:31:24.624189  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.624201  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:31:24.624216  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:31:24.624231  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:31:24.717945  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:31:24.717971  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:31:24.717989  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:31:24.826216  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:31:24.826260  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:31:24.878403  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:31:24.878439  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:31:24.931058  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:31:24.931102  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
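The Run lines above show the exact diagnostics minikube collects after the failure. A sketch of gathering the same information manually on the node (assuming sudo access; the commands mirror those in the log):

	# kubelet and CRI-O journals, as collected by minikube above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# container status and recent kernel warnings/errors
	sudo crictl ps -a
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400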
	W0122 21:31:24.947080  312675 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:31:24.947171  312675 out.go:270] * 
	W0122 21:31:24.947310  312675 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.947331  312675 out.go:270] * 
	W0122 21:31:24.948119  312675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:31:24.951080  312675 out.go:201] 
	W0122 21:31:24.952375  312675 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0122 21:31:24.952433  312675 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:31:24.952459  312675 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:31:24.954056  312675 out.go:201] 
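The suggestion above amounts to retrying the start with the kubelet's cgroup driver pinned to systemd. A sketch of that retry; only the --extra-config flag is quoted in the log, and the remaining flags of the original invocation (profile, driver, Kubernetes version) are assumed to be kept unchanged and are not shown here:

	# re-run start with the cgroup driver suggested in the log;
	# other flags from the original start command are assumed and omitted
	minikube start --extra-config=kubelet.cgroup-driver=systemd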
	
	
	==> CRI-O <==
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.817144555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=737be242-4fb4-444c-ad1c-bb9dc4f74614 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.818560338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adf7d31e-e5d9-43e6-9c7f-7d46143824be name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.819265184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582485819239724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adf7d31e-e5d9-43e6-9c7f-7d46143824be name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.820210137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=784da730-3e09-483e-bb89-b3918fb1dcc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.820273619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=784da730-3e09-483e-bb89-b3918fb1dcc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.820560061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c,PodSandboxId:30d62e65bb3de374841a49aeb42ffd6590438f30e91f9011710c3fb355fc3e99,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582196896404142,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-w2hcz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cc86fd9b-6798-4e51-85f5-223d956c4f27,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b368144ab955cd90b77e53ded8f838370ea9b82b49e83b2577f891ca3c4681,PodSandboxId:13b86bd3e3be807bb9193a767f0fa746e7d2be480ca89a6723a966952cdbb850,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581238423451468,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9ljv8,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 220aea30-9815-497a-97a3-0cbbcf8c5d68,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40c9c6c1163ad5f880d66b4aa75b50464163bd410e338c4c5f98f87d0098396,PodSandboxId:2d36bdbfeafb4d34d4737ef906ed67f98fd79163b74e7dfb27e17e879d28fe75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581229134598150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb4eba-25e7-4a79-9e42-842137fa7606,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27eb5bb49e16d88ad279e71e17bf689097c2fc1e050544d8c296ce5757defe0b,PodSandboxId:b88dcb73ace9adc1824e32e679adcab0e2d4d365b26e1b0172c9a6a536a70ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228599139333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8xm2c,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef56ed64-d524-4967-9f8c-eda485fd9902,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbeddf37d1a8f6819afbea158bed384528478f6def9b7b871af73ce22a0be15,PodSandboxId:a2b4c268baf8865e1d45d53d295fe798c32d79dd00e525396c684d4d48bc4868,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228563960165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-28dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a93a6c-6717-4152-a7db-42a8cd6786d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6caa77475a48a39066b56cb6048a05bd5737d2cd2370ec00cf4bcb59aa21ab,PodSandboxId:f42fb4aedd1e62e6f46069bc0461fa02b90b1dd96325b889d46b9f644d2b3d96,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581227793361207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48rkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94f180-3afc-4823-8347-ade4af0075d5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004a1f08948d4913f6440c805c91fd4bb8a915fe07b516e14c9b68fb4c3af3b,PodSandboxId:c19b6d5fa127615206e328f4e5d04ab0b34b316d2c557a54f5a0d01278adf75b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f
590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581215802599799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85983b28c646aecc02d3e31fed7b352a,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe951b2be56fb6e0648179f45d06ad630f585ab2bce01fc6c25eb1fc444bbe8,PodSandboxId:98327f03fc95a8b6d936e2769c95b8467c024e8323b945ee9723c1e070e1967e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581215659261896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8378c171d855ee07ed0ca3deef3884ee,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb3a2130f0ab0521f00314ee1b4e44ab8c47ad48c47fbeb7b47444a2bb9778e,PodSandboxId:4fac7f991cb4dc0158c4980ce1302eb39a711890cd788cb1d0472da5269b5a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581215654455071,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6719e2575f307cdab84b1d57a27cad45,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e5052d958a02e1b4028022a27e0c682b9d792638f6bcb0bf500aae0f4d8298,PodSandboxId:c12d50271ba131e7bcbed37c57f648060714047be01bddc20d916887897eafe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581215529349440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fa17ee123ba18cd22946e5bfbace5acb392e9cb9e9ef6c9684c2610ee4e5c15,PodSandboxId:92210a53ad628f67928675a9e1abf68370f8e18bf31ba3ac07696dcb70865916,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580927440776094,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=784da730-3e09-483e-bb89-b3918fb1dcc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.859503261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e843a47-e44c-43af-8d49-9a73abd74f2b name=/runtime.v1.RuntimeService/Version
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.859578074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e843a47-e44c-43af-8d49-9a73abd74f2b name=/runtime.v1.RuntimeService/Version
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.861092260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3fc0370-ee40-449e-982a-9b3ce9618736 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.861559665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582485861534770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3fc0370-ee40-449e-982a-9b3ce9618736 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.862540383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd6b4b05-a9f8-44ee-9efb-56c32205068c name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.862602833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd6b4b05-a9f8-44ee-9efb-56c32205068c name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.862912684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c,PodSandboxId:30d62e65bb3de374841a49aeb42ffd6590438f30e91f9011710c3fb355fc3e99,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582196896404142,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-w2hcz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cc86fd9b-6798-4e51-85f5-223d956c4f27,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b368144ab955cd90b77e53ded8f838370ea9b82b49e83b2577f891ca3c4681,PodSandboxId:13b86bd3e3be807bb9193a767f0fa746e7d2be480ca89a6723a966952cdbb850,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581238423451468,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9ljv8,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 220aea30-9815-497a-97a3-0cbbcf8c5d68,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40c9c6c1163ad5f880d66b4aa75b50464163bd410e338c4c5f98f87d0098396,PodSandboxId:2d36bdbfeafb4d34d4737ef906ed67f98fd79163b74e7dfb27e17e879d28fe75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581229134598150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb4eba-25e7-4a79-9e42-842137fa7606,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27eb5bb49e16d88ad279e71e17bf689097c2fc1e050544d8c296ce5757defe0b,PodSandboxId:b88dcb73ace9adc1824e32e679adcab0e2d4d365b26e1b0172c9a6a536a70ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228599139333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8xm2c,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef56ed64-d524-4967-9f8c-eda485fd9902,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbeddf37d1a8f6819afbea158bed384528478f6def9b7b871af73ce22a0be15,PodSandboxId:a2b4c268baf8865e1d45d53d295fe798c32d79dd00e525396c684d4d48bc4868,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228563960165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-28dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a93a6c-6717-4152-a7db-42a8cd6786d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6caa77475a48a39066b56cb6048a05bd5737d2cd2370ec00cf4bcb59aa21ab,PodSandboxId:f42fb4aedd1e62e6f46069bc0461fa02b90b1dd96325b889d46b9f644d2b3d96,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581227793361207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48rkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94f180-3afc-4823-8347-ade4af0075d5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004a1f08948d4913f6440c805c91fd4bb8a915fe07b516e14c9b68fb4c3af3b,PodSandboxId:c19b6d5fa127615206e328f4e5d04ab0b34b316d2c557a54f5a0d01278adf75b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f
590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581215802599799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85983b28c646aecc02d3e31fed7b352a,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe951b2be56fb6e0648179f45d06ad630f585ab2bce01fc6c25eb1fc444bbe8,PodSandboxId:98327f03fc95a8b6d936e2769c95b8467c024e8323b945ee9723c1e070e1967e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581215659261896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8378c171d855ee07ed0ca3deef3884ee,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb3a2130f0ab0521f00314ee1b4e44ab8c47ad48c47fbeb7b47444a2bb9778e,PodSandboxId:4fac7f991cb4dc0158c4980ce1302eb39a711890cd788cb1d0472da5269b5a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581215654455071,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6719e2575f307cdab84b1d57a27cad45,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e5052d958a02e1b4028022a27e0c682b9d792638f6bcb0bf500aae0f4d8298,PodSandboxId:c12d50271ba131e7bcbed37c57f648060714047be01bddc20d916887897eafe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581215529349440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fa17ee123ba18cd22946e5bfbace5acb392e9cb9e9ef6c9684c2610ee4e5c15,PodSandboxId:92210a53ad628f67928675a9e1abf68370f8e18bf31ba3ac07696dcb70865916,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580927440776094,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd6b4b05-a9f8-44ee-9efb-56c32205068c name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.876935547Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18ec15eb-63e7-4b12-8ce4-d5b18224124b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.877290560Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:30d62e65bb3de374841a49aeb42ffd6590438f30e91f9011710c3fb355fc3e99,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-86c6bf9756-w2hcz,Uid:cc86fd9b-6798-4e51-85f5-223d956c4f27,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581230674893830,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-w2hcz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cc86fd9b-6798-4e51-85f5-223d956c4f27,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-22T21:27:10.355969570Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:13b86bd3e3be807bb9193a767f0fa746e7d2be480c
a89a6723a966952cdbb850,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-9ljv8,Uid:220aea30-9815-497a-97a3-0cbbcf8c5d68,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581230628635422,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9ljv8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 220aea30-9815-497a-97a3-0cbbcf8c5d68,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-22T21:27:10.319384809Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:167c69ae0de3a1178e5f1434e0480c28a76aefcc4c388ee03e3277740656ebcc,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-vsbtm,Uid:81d12c97-93d0-4cfc-ab1f-b9e7b698b275,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581229240621207,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: metrics-server-f79f97bbb-vsbtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d12c97-93d0-4cfc-ab1f-b9e7b698b275,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-22T21:27:08.607087400Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d36bdbfeafb4d34d4737ef906ed67f98fd79163b74e7dfb27e17e879d28fe75,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:eecb4eba-25e7-4a79-9e42-842137fa7606,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581228471741709,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb4eba-25e7-4a79-9e42-842137fa7606,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\
":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-22T21:27:08.154271371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f42fb4aedd1e62e6f46069bc0461fa02b90b1dd96325b889d46b9f644d2b3d96,Metadata:&PodSandboxMetadata{Name:kube-proxy-48rkl,Uid:fa94f180-3afc-4823-8347-ade4af0075d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581227429068919,Labels:map[string]string{controller-revision-h
ash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-48rkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94f180-3afc-4823-8347-ade4af0075d5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-22T21:27:06.220531422Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2b4c268baf8865e1d45d53d295fe798c32d79dd00e525396c684d4d48bc4868,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-28dbf,Uid:d4a93a6c-6717-4152-a7db-42a8cd6786d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581227063479485,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-28dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a93a6c-6717-4152-a7db-42a8cd6786d6,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-22T21:27:06.156840291Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:b88dcb73ace9adc1824e32e679adcab0e2d4d365b26e1b0172c9a6a536a70ce8,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-8xm2c,Uid:ef56ed64-d524-4967-9f8c-eda485fd9902,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581227047798850,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-8xm2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef56ed64-d524-4967-9f8c-eda485fd9902,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-22T21:27:06.122780635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c19b6d5fa127615206e328f4e5d04ab0b34b316d2c557a54f5a0d01278adf75b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-991469,Uid:85983b28c646aecc02d3e31fed7b352a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581215374059316,Labels:map[string]string{component: kube-scheduler,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85983b28c646aecc02d3e31fed7b352a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 85983b28c646aecc02d3e31fed7b352a,kubernetes.io/config.seen: 2025-01-22T21:26:54.915170358Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c12d50271ba131e7bcbed37c57f648060714047be01bddc20d916887897eafe0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-991469,Uid:ca2bfecbc87b9c6e2c1ccfa6445c5ad3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737581215372451417,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-a
piserver.advertise-address.endpoint: 192.168.61.98:8444,kubernetes.io/config.hash: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,kubernetes.io/config.seen: 2025-01-22T21:26:54.915174018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4fac7f991cb4dc0158c4980ce1302eb39a711890cd788cb1d0472da5269b5a10,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-991469,Uid:6719e2575f307cdab84b1d57a27cad45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581215362265871,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6719e2575f307cdab84b1d57a27cad45,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6719e2575f307cdab84b1d57a27cad45,kubernetes.io/config.seen: 2025-01-22T21:26:54.915164142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9
8327f03fc95a8b6d936e2769c95b8467c024e8323b945ee9723c1e070e1967e,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-991469,Uid:8378c171d855ee07ed0ca3deef3884ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737581215360815449,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8378c171d855ee07ed0ca3deef3884ee,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.98:2379,kubernetes.io/config.hash: 8378c171d855ee07ed0ca3deef3884ee,kubernetes.io/config.seen: 2025-01-22T21:26:54.915172133Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=18ec15eb-63e7-4b12-8ce4-d5b18224124b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.878212192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=190d78e1-8e09-49eb-95f6-c11b59250149 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.878272825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=190d78e1-8e09-49eb-95f6-c11b59250149 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.878522826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5b368144ab955cd90b77e53ded8f838370ea9b82b49e83b2577f891ca3c4681,PodSandboxId:13b86bd3e3be807bb9193a767f0fa746e7d2be480ca89a6723a966952cdbb850,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581238423451468,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9ljv8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 220aea30-9815-497a-97a3-0cbbcf8c5d68,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40c9c6c1163ad5f880d66b4aa75b50464163bd410e338c4c5f98f87d0098396,PodSandboxId:2d36bdbfeafb4d34d4737ef906ed67f98fd79163b74e7dfb27e17e879d28fe75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581229134598150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb4eba-25e7-4a79-
9e42-842137fa7606,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27eb5bb49e16d88ad279e71e17bf689097c2fc1e050544d8c296ce5757defe0b,PodSandboxId:b88dcb73ace9adc1824e32e679adcab0e2d4d365b26e1b0172c9a6a536a70ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228599139333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8xm2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef56ed64-d524-4967-9f8c-eda485fd9902,},Annotations
:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbeddf37d1a8f6819afbea158bed384528478f6def9b7b871af73ce22a0be15,PodSandboxId:a2b4c268baf8865e1d45d53d295fe798c32d79dd00e525396c684d4d48bc4868,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228563960165,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-28dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a93a6c-6717-4152-a7db-42a8cd6786d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6caa77475a48a39066b56cb6048a05bd5737d2cd2370ec00cf4bcb59aa21ab,PodSandboxId:f42fb4aedd1e62e6f46069bc0461fa02b90b1dd96325b889d46b9f644d2b3d96,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581227793361207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48rkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94f180-3afc-4823-8347-ade4af0075d5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004a1f08948d4913f6440c805c91fd4bb8a915fe07b516e14c9b68fb4c3af3b,PodSandboxId:c19b6d5fa127615206e328f4e5d04ab0b34b316d2c557a54f5a0d01278adf75b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581215802599799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85983b28c646aecc02d3e31fed7b352a,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe951b2be56fb6e0648179f45d06ad630f585ab2bce01fc6c25eb1fc444bbe8,PodSandboxId:98327f03fc95a8b6d936e2769c95b8467c024e8323b945ee9723c1e070e1967e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581215659261896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8378c171d855ee07ed0ca3deef3884ee,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb3a2130f0ab0521f00314ee1b4e44ab8c47ad48c47fbeb7b47444a2bb9778e,PodSandboxId:4fac7f991cb4dc0158c4980ce1302eb39a711890cd788cb1d0472da5269b5a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581215654455071,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6719e2575f307cdab84b1d57a27cad45,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e5052d958a02e1b4028022a27e0c682b9d792638f6bcb0bf500aae0f4d8298,PodSandboxId:c12d50271ba131e7bcbed37c57f648060714047be01bddc20d916887897eafe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581215529349440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=190d78e1-8e09-49eb-95f6-c11b59250149 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.918649164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5237cc1-8da6-4da2-8374-596d61b79a66 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.918784656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5237cc1-8da6-4da2-8374-596d61b79a66 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.921130709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e215ad1-fa10-45dc-a4ec-9778d025d8e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.921616052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582485921588252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e215ad1-fa10-45dc-a4ec-9778d025d8e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.922317934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27c499d1-7550-46a2-a204-b78e6ee04302 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.922405992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27c499d1-7550-46a2-a204-b78e6ee04302 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:48:05 default-k8s-diff-port-991469 crio[727]: time="2025-01-22 21:48:05.922776619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c,PodSandboxId:30d62e65bb3de374841a49aeb42ffd6590438f30e91f9011710c3fb355fc3e99,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737582196896404142,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-w2hcz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cc86fd9b-6798-4e51-85f5-223d956c4f27,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5b368144ab955cd90b77e53ded8f838370ea9b82b49e83b2577f891ca3c4681,PodSandboxId:13b86bd3e3be807bb9193a767f0fa746e7d2be480ca89a6723a966952cdbb850,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737581238423451468,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9ljv8,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 220aea30-9815-497a-97a3-0cbbcf8c5d68,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40c9c6c1163ad5f880d66b4aa75b50464163bd410e338c4c5f98f87d0098396,PodSandboxId:2d36bdbfeafb4d34d4737ef906ed67f98fd79163b74e7dfb27e17e879d28fe75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737581229134598150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb4eba-25e7-4a79-9e42-842137fa7606,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27eb5bb49e16d88ad279e71e17bf689097c2fc1e050544d8c296ce5757defe0b,PodSandboxId:b88dcb73ace9adc1824e32e679adcab0e2d4d365b26e1b0172c9a6a536a70ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228599139333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8xm2c,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef56ed64-d524-4967-9f8c-eda485fd9902,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbeddf37d1a8f6819afbea158bed384528478f6def9b7b871af73ce22a0be15,PodSandboxId:a2b4c268baf8865e1d45d53d295fe798c32d79dd00e525396c684d4d48bc4868,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737581228563960165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-28dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a93a6c-6717-4152-a7db-42a8cd6786d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6caa77475a48a39066b56cb6048a05bd5737d2cd2370ec00cf4bcb59aa21ab,PodSandboxId:f42fb4aedd1e62e6f46069bc0461fa02b90b1dd96325b889d46b9f644d2b3d96,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737581227793361207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48rkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa94f180-3afc-4823-8347-ade4af0075d5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004a1f08948d4913f6440c805c91fd4bb8a915fe07b516e14c9b68fb4c3af3b,PodSandboxId:c19b6d5fa127615206e328f4e5d04ab0b34b316d2c557a54f5a0d01278adf75b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f
590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737581215802599799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85983b28c646aecc02d3e31fed7b352a,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe951b2be56fb6e0648179f45d06ad630f585ab2bce01fc6c25eb1fc444bbe8,PodSandboxId:98327f03fc95a8b6d936e2769c95b8467c024e8323b945ee9723c1e070e1967e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737581215659261896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8378c171d855ee07ed0ca3deef3884ee,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb3a2130f0ab0521f00314ee1b4e44ab8c47ad48c47fbeb7b47444a2bb9778e,PodSandboxId:4fac7f991cb4dc0158c4980ce1302eb39a711890cd788cb1d0472da5269b5a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4b
f959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737581215654455071,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6719e2575f307cdab84b1d57a27cad45,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e5052d958a02e1b4028022a27e0c682b9d792638f6bcb0bf500aae0f4d8298,PodSandboxId:c12d50271ba131e7bcbed37c57f648060714047be01bddc20d916887897eafe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737581215529349440,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fa17ee123ba18cd22946e5bfbace5acb392e9cb9e9ef6c9684c2610ee4e5c15,PodSandboxId:92210a53ad628f67928675a9e1abf68370f8e18bf31ba3ac07696dcb70865916,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737580927440776094,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-991469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca2bfecbc87b9c6e2c1ccfa6445c5ad3,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27c499d1-7550-46a2-a204-b78e6ee04302 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	969c003a4d40d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   30d62e65bb3de       dashboard-metrics-scraper-86c6bf9756-w2hcz
	d5b368144ab95       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   13b86bd3e3be8       kubernetes-dashboard-7779f9b69b-9ljv8
	b40c9c6c1163a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 minutes ago      Running             storage-provisioner         0                   2d36bdbfeafb4       storage-provisioner
	27eb5bb49e16d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   b88dcb73ace9a       coredns-668d6bf9bc-8xm2c
	cdbeddf37d1a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   a2b4c268baf88       coredns-668d6bf9bc-28dbf
	5b6caa77475a4       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           20 minutes ago      Running             kube-proxy                  0                   f42fb4aedd1e6       kube-proxy-48rkl
	6004a1f08948d       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   c19b6d5fa1276       kube-scheduler-default-k8s-diff-port-991469
	fbe951b2be56f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   98327f03fc95a       etcd-default-k8s-diff-port-991469
	5cb3a2130f0ab       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   4fac7f991cb4d       kube-controller-manager-default-k8s-diff-port-991469
	d4e5052d958a0       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   c12d50271ba13       kube-apiserver-default-k8s-diff-port-991469
	2fa17ee123ba1       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           25 minutes ago      Exited              kube-apiserver              1                   92210a53ad628       kube-apiserver-default-k8s-diff-port-991469
	
	
	==> coredns [27eb5bb49e16d88ad279e71e17bf689097c2fc1e050544d8c296ce5757defe0b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [cdbeddf37d1a8f6819afbea158bed384528478f6def9b7b871af73ce22a0be15] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-991469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-991469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4
	                    minikube.k8s.io/name=default-k8s-diff-port-991469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_22T21_27_02_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 Jan 2025 21:26:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-991469
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 Jan 2025 21:47:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 Jan 2025 21:43:01 +0000   Wed, 22 Jan 2025 21:26:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 Jan 2025 21:43:01 +0000   Wed, 22 Jan 2025 21:26:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 Jan 2025 21:43:01 +0000   Wed, 22 Jan 2025 21:26:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 Jan 2025 21:43:01 +0000   Wed, 22 Jan 2025 21:26:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.98
	  Hostname:    default-k8s-diff-port-991469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dc3c6f59f8b4b7594b78048bf6def71
	  System UUID:                6dc3c6f5-9f8b-4b75-94b7-8048bf6def71
	  Boot ID:                    76e7373e-9fd2-4eb5-9619-7ee1320576fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-28dbf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-8xm2c                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-991469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-991469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-991469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-48rkl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-991469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-vsbtm                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-w2hcz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-9ljv8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-991469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-991469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-991469 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-991469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-991469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-991469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-991469 event: Registered Node default-k8s-diff-port-991469 in Controller
	
	
	==> dmesg <==
	[  +0.045157] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.233636] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.223785] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.742400] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.789194] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.070663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082550] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.229884] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.150032] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.382783] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[Jan22 21:22] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.059741] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.523049] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.676826] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.082001] kauditd_printk_skb: 85 callbacks suppressed
	[Jan22 21:26] systemd-fstab-generator[2710]: Ignoring "noauto" option for root device
	[  +0.061099] kauditd_printk_skb: 9 callbacks suppressed
	[Jan22 21:27] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +0.101464] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.046318] systemd-fstab-generator[3165]: Ignoring "noauto" option for root device
	[  +0.149853] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.824008] kauditd_printk_skb: 108 callbacks suppressed
	[ +11.877169] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [fbe951b2be56fb6e0648179f45d06ad630f585ab2bce01fc6c25eb1fc444bbe8] <==
	{"level":"info","ts":"2025-01-22T21:26:57.366610Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-22T21:26:57.367263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9445fc43dab7239","local-member-id":"505130e22c5c49a1","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-22T21:26:57.367860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-22T21:26:57.367925Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-22T21:27:18.049140Z","caller":"traceutil/trace.go:171","msg":"trace[2138945320] linearizableReadLoop","detail":"{readStateIndex:542; appliedIndex:541; }","duration":"289.603161ms","start":"2025-01-22T21:27:17.759483Z","end":"2025-01-22T21:27:18.049086Z","steps":["trace[2138945320] 'read index received'  (duration: 288.948242ms)","trace[2138945320] 'applied index is now lower than readState.Index'  (duration: 653.958µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T21:27:18.049839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.273842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:18.049918Z","caller":"traceutil/trace.go:171","msg":"trace[1054880635] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:529; }","duration":"290.454365ms","start":"2025-01-22T21:27:17.759439Z","end":"2025-01-22T21:27:18.049894Z","steps":["trace[1054880635] 'agreement among raft nodes before linearized reading'  (duration: 290.182643ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:18.049360Z","caller":"traceutil/trace.go:171","msg":"trace[355404618] transaction","detail":"{read_only:false; response_revision:529; number_of_response:1; }","duration":"293.457881ms","start":"2025-01-22T21:27:17.755882Z","end":"2025-01-22T21:27:18.049340Z","steps":["trace[355404618] 'process raft request'  (duration: 292.645159ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:20.190629Z","caller":"traceutil/trace.go:171","msg":"trace[644585993] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"120.639743ms","start":"2025-01-22T21:27:20.069972Z","end":"2025-01-22T21:27:20.190612Z","steps":["trace[644585993] 'process raft request'  (duration: 120.132606ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:22.318131Z","caller":"traceutil/trace.go:171","msg":"trace[616666622] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"114.126338ms","start":"2025-01-22T21:27:22.203982Z","end":"2025-01-22T21:27:22.318108Z","steps":["trace[616666622] 'process raft request'  (duration: 113.817983ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:27:52.862419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.790355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:52.862773Z","caller":"traceutil/trace.go:171","msg":"trace[692377607] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:600; }","duration":"105.233339ms","start":"2025-01-22T21:27:52.757517Z","end":"2025-01-22T21:27:52.862750Z","steps":["trace[692377607] 'range keys from in-memory index tree'  (duration: 104.607566ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:53.180521Z","caller":"traceutil/trace.go:171","msg":"trace[1833581707] linearizableReadLoop","detail":"{readStateIndex:621; appliedIndex:620; }","duration":"160.621905ms","start":"2025-01-22T21:27:53.019879Z","end":"2025-01-22T21:27:53.180500Z","steps":["trace[1833581707] 'read index received'  (duration: 160.394219ms)","trace[1833581707] 'applied index is now lower than readState.Index'  (duration: 227.099µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T21:27:53.180819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.917387ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:27:53.180901Z","caller":"traceutil/trace.go:171","msg":"trace[1667004055] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:601; }","duration":"161.013136ms","start":"2025-01-22T21:27:53.019871Z","end":"2025-01-22T21:27:53.180884Z","steps":["trace[1667004055] 'agreement among raft nodes before linearized reading'  (duration: 160.784776ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:27:53.181235Z","caller":"traceutil/trace.go:171","msg":"trace[235301492] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"166.31729ms","start":"2025-01-22T21:27:53.014907Z","end":"2025-01-22T21:27:53.181224Z","steps":["trace[235301492] 'process raft request'  (duration: 165.423855ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:36:57.410596Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":870}
	{"level":"info","ts":"2025-01-22T21:36:57.437943Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":870,"took":"26.874096ms","hash":3104363872,"current-db-size-bytes":2822144,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2822144,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-01-22T21:36:57.438087Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3104363872,"revision":870,"compact-revision":-1}
	{"level":"info","ts":"2025-01-22T21:41:57.420590Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1122}
	{"level":"info","ts":"2025-01-22T21:41:57.430928Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1122,"took":"9.164702ms","hash":4018908496,"current-db-size-bytes":2822144,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1785856,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-22T21:41:57.431006Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4018908496,"revision":1122,"compact-revision":870}
	{"level":"info","ts":"2025-01-22T21:46:57.432082Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1373}
	{"level":"info","ts":"2025-01-22T21:46:57.437506Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1373,"took":"4.96261ms","hash":3976679994,"current-db-size-bytes":2822144,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1789952,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-22T21:46:57.437586Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3976679994,"revision":1373,"compact-revision":1122}
	
	
	==> kernel <==
	 21:48:06 up 26 min,  0 users,  load average: 0.08, 0.19, 0.20
	Linux default-k8s-diff-port-991469 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2fa17ee123ba18cd22946e5bfbace5acb392e9cb9e9ef6c9684c2610ee4e5c15] <==
	W0122 21:26:47.557849       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.562486       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.562631       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.593810       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.601433       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.640550       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.663770       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.668865       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.701102       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.713996       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.779589       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:47.788544       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.064141       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.082918       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.173342       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.289270       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.292892       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.329491       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.634071       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:48.634225       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:49.097497       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:51.300508       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:51.384103       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:52.043055       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0122 21:26:52.061159       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d4e5052d958a02e1b4028022a27e0c682b9d792638f6bcb0bf500aae0f4d8298] <==
	E0122 21:44:59.842213       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0122 21:44:59.843402       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 21:46:58.840536       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:46:58.840825       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0122 21:46:59.843330       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:46:59.843532       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0122 21:46:59.843616       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:46:59.843738       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0122 21:46:59.845009       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:46:59.845097       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 21:47:59.846379       1 handler_proxy.go:99] no RequestInfo found in the context
	W0122 21:47:59.846495       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:47:59.846868       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0122 21:47:59.846898       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0122 21:47:59.848153       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:47:59.848352       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5cb3a2130f0ab0521f00314ee1b4e44ab8c47ad48c47fbeb7b47444a2bb9778e] <==
	I0122 21:43:05.689279       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0122 21:43:17.484347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="332.023µs"
	I0122 21:43:18.478955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="51.226µs"
	I0122 21:43:18.893120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="79.574µs"
	I0122 21:43:31.900599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="182.15µs"
	E0122 21:43:35.572381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:43:35.697455       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:44:05.582023       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:44:05.708150       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:44:35.590486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:44:35.717936       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:45:05.597474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:45:05.726861       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:45:35.605747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:45:35.736484       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:46:05.613577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:46:05.746309       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:46:35.621968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:46:35.754942       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:47:05.630383       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:47:05.765217       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:47:35.637881       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:47:35.775053       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:48:05.645340       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:48:05.784363       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [5b6caa77475a48a39066b56cb6048a05bd5737d2cd2370ec00cf4bcb59aa21ab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0122 21:27:09.515457       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0122 21:27:09.538061       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.98"]
	E0122 21:27:09.538244       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0122 21:27:09.701195       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0122 21:27:09.701251       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 21:27:09.701289       1 server_linux.go:170] "Using iptables Proxier"
	I0122 21:27:09.707177       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0122 21:27:09.707515       1 server.go:497] "Version info" version="v1.32.1"
	I0122 21:27:09.707545       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 21:27:09.709622       1 config.go:199] "Starting service config controller"
	I0122 21:27:09.709724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0122 21:27:09.710097       1 config.go:105] "Starting endpoint slice config controller"
	I0122 21:27:09.710123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0122 21:27:09.710179       1 config.go:329] "Starting node config controller"
	I0122 21:27:09.710190       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0122 21:27:09.810788       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0122 21:27:09.810842       1 shared_informer.go:320] Caches are synced for service config
	I0122 21:27:09.813432       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6004a1f08948d4913f6440c805c91fd4bb8a915fe07b516e14c9b68fb4c3af3b] <==
	W0122 21:26:59.764592       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0122 21:26:59.764651       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:59.810086       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0122 21:26:59.810146       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:59.834121       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0122 21:26:59.834237       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:59.900449       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0122 21:26:59.900524       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:59.902063       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0122 21:26:59.902133       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:59.928138       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0122 21:26:59.928229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:26:59.972015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0122 21:26:59.972085       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:27:00.031774       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 21:27:00.031811       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:27:00.082080       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 21:27:00.082201       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:27:00.114567       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 21:27:00.114730       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:27:00.125353       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 21:27:00.125529       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:27:00.260209       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 21:27:00.260359       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0122 21:27:01.955633       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 22 21:47:22 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:22.316337    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582442315429283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:22 default-k8s-diff-port-991469 kubelet[3061]: I0122 21:47:22.875362    3061 scope.go:117] "RemoveContainer" containerID="969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c"
	Jan 22 21:47:22 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:22.875941    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-w2hcz_kubernetes-dashboard(cc86fd9b-6798-4e51-85f5-223d956c4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-w2hcz" podUID="cc86fd9b-6798-4e51-85f5-223d956c4f27"
	Jan 22 21:47:29 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:29.878989    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vsbtm" podUID="81d12c97-93d0-4cfc-ab1f-b9e7b698b275"
	Jan 22 21:47:32 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:32.320650    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582452319565340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:32 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:32.320797    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582452319565340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:35 default-k8s-diff-port-991469 kubelet[3061]: I0122 21:47:35.875640    3061 scope.go:117] "RemoveContainer" containerID="969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c"
	Jan 22 21:47:35 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:35.876277    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-w2hcz_kubernetes-dashboard(cc86fd9b-6798-4e51-85f5-223d956c4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-w2hcz" podUID="cc86fd9b-6798-4e51-85f5-223d956c4f27"
	Jan 22 21:47:42 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:42.323480    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582462322993478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:42 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:42.323852    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582462322993478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:43 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:43.876643    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vsbtm" podUID="81d12c97-93d0-4cfc-ab1f-b9e7b698b275"
	Jan 22 21:47:48 default-k8s-diff-port-991469 kubelet[3061]: I0122 21:47:48.875782    3061 scope.go:117] "RemoveContainer" containerID="969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c"
	Jan 22 21:47:48 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:48.876008    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-w2hcz_kubernetes-dashboard(cc86fd9b-6798-4e51-85f5-223d956c4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-w2hcz" podUID="cc86fd9b-6798-4e51-85f5-223d956c4f27"
	Jan 22 21:47:52 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:52.326617    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582472325979932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:52 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:52.326969    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582472325979932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:47:54 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:54.876217    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vsbtm" podUID="81d12c97-93d0-4cfc-ab1f-b9e7b698b275"
	Jan 22 21:47:59 default-k8s-diff-port-991469 kubelet[3061]: I0122 21:47:59.875499    3061 scope.go:117] "RemoveContainer" containerID="969c003a4d40daf8bfc840cf9d0dade9cb9b602a5ce3d2095ef09726e5df085c"
	Jan 22 21:47:59 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:47:59.875762    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-w2hcz_kubernetes-dashboard(cc86fd9b-6798-4e51-85f5-223d956c4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-w2hcz" podUID="cc86fd9b-6798-4e51-85f5-223d956c4f27"
	Jan 22 21:48:01 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:48:01.923133    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:48:01 default-k8s-diff-port-991469 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:48:01 default-k8s-diff-port-991469 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:48:01 default-k8s-diff-port-991469 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:48:01 default-k8s-diff-port-991469 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:48:02 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:48:02.329035    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582482328497401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 22 21:48:02 default-k8s-diff-port-991469 kubelet[3061]: E0122 21:48:02.329084    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582482328497401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [d5b368144ab955cd90b77e53ded8f838370ea9b82b49e83b2577f891ca3c4681] <==
	2025/01/22 21:35:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:36:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:36:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:37:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:37:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:38:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:38:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:39:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:39:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:40:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:40:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:41:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:41:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:42:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:42:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:43:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:43:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:44:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:44:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:45:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:45:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:46:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:47:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b40c9c6c1163ad5f880d66b4aa75b50464163bd410e338c4c5f98f87d0098396] <==
	I0122 21:27:09.622381       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0122 21:27:09.639577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0122 21:27:09.639620       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0122 21:27:09.666274       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0122 21:27:09.666470       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-991469_7f947efc-88d3-4034-ba39-ae266dd92ec7!
	I0122 21:27:09.667463       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3230fc2e-33af-47dc-821d-e23e8c0fdc6f", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-991469_7f947efc-88d3-4034-ba39-ae266dd92ec7 became leader
	I0122 21:27:09.766891       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-991469_7f947efc-88d3-4034-ba39-ae266dd92ec7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991469 -n default-k8s-diff-port-991469
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-991469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-vsbtm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-991469 describe pod metrics-server-f79f97bbb-vsbtm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991469 describe pod metrics-server-f79f97bbb-vsbtm: exit status 1 (69.780502ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-vsbtm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-991469 describe pod metrics-server-f79f97bbb-vsbtm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1592.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (511.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0122 21:23:07.065254  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:13.057217  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:14.822356  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:26.825582  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:48.027099  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:04.377209  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:34.482418  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:34.979524  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:47.846718  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:48.747042  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:02.185658  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:09.948784  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:11.097657  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:15.548085  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:27.450407  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:38.800617  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:50.257292  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m29.556688822s)

                                                
                                                
-- stdout --
	* [old-k8s-version-181389] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-181389" primary control-plane node in "old-k8s-version-181389" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-181389" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 21:22:55.453645  312675 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:22:55.453775  312675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:22:55.453784  312675 out.go:358] Setting ErrFile to fd 2...
	I0122 21:22:55.453789  312675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:22:55.454029  312675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:22:55.454689  312675 out.go:352] Setting JSON to false
	I0122 21:22:55.455770  312675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14722,"bootTime":1737566254,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:22:55.455908  312675 start.go:139] virtualization: kvm guest
	I0122 21:22:55.458242  312675 out.go:177] * [old-k8s-version-181389] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:22:55.459655  312675 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:22:55.459776  312675 notify.go:220] Checking for updates...
	I0122 21:22:55.461958  312675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:22:55.463407  312675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:22:55.464686  312675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:22:55.465961  312675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:22:55.467287  312675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:22:55.468960  312675 config.go:182] Loaded profile config "old-k8s-version-181389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0122 21:22:55.469405  312675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:22:55.469504  312675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:22:55.485941  312675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0122 21:22:55.486502  312675 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:22:55.487129  312675 main.go:141] libmachine: Using API Version  1
	I0122 21:22:55.487158  312675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:22:55.487559  312675 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:22:55.487750  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:22:55.489592  312675 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0122 21:22:55.490876  312675 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:22:55.491259  312675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:22:55.491334  312675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:22:55.507920  312675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0122 21:22:55.508536  312675 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:22:55.509168  312675 main.go:141] libmachine: Using API Version  1
	I0122 21:22:55.509199  312675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:22:55.509526  312675 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:22:55.509731  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:22:55.550470  312675 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:22:55.551725  312675 start.go:297] selected driver: kvm2
	I0122 21:22:55.551751  312675 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:22:55.551891  312675 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:22:55.552583  312675 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:22:55.552659  312675 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:22:55.571300  312675 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:22:55.571723  312675 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:22:55.571764  312675 cni.go:84] Creating CNI manager for ""
	I0122 21:22:55.571812  312675 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:22:55.571852  312675 start.go:340] cluster config:
	{Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:22:55.571983  312675 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:22:55.573712  312675 out.go:177] * Starting "old-k8s-version-181389" primary control-plane node in "old-k8s-version-181389" cluster
	I0122 21:22:55.575073  312675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 21:22:55.575138  312675 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0122 21:22:55.575150  312675 cache.go:56] Caching tarball of preloaded images
	I0122 21:22:55.575294  312675 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:22:55.575308  312675 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0122 21:22:55.575413  312675 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/config.json ...
	I0122 21:22:55.575685  312675 start.go:360] acquireMachinesLock for old-k8s-version-181389: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:22:55.575743  312675 start.go:364] duration metric: took 31.914µs to acquireMachinesLock for "old-k8s-version-181389"
	I0122 21:22:55.575759  312675 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:22:55.575765  312675 fix.go:54] fixHost starting: 
	I0122 21:22:55.576048  312675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:22:55.576083  312675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:22:55.592164  312675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0122 21:22:55.592653  312675 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:22:55.593325  312675 main.go:141] libmachine: Using API Version  1
	I0122 21:22:55.593367  312675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:22:55.593749  312675 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:22:55.593999  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:22:55.594214  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetState
	I0122 21:22:55.596375  312675 fix.go:112] recreateIfNeeded on old-k8s-version-181389: state=Stopped err=<nil>
	I0122 21:22:55.596415  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	W0122 21:22:55.596601  312675 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:22:55.598793  312675 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-181389" ...
	I0122 21:22:55.600246  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .Start
	I0122 21:22:55.600619  312675 main.go:141] libmachine: (old-k8s-version-181389) starting domain...
	I0122 21:22:55.600647  312675 main.go:141] libmachine: (old-k8s-version-181389) ensuring networks are active...
	I0122 21:22:55.601628  312675 main.go:141] libmachine: (old-k8s-version-181389) Ensuring network default is active
	I0122 21:22:55.602025  312675 main.go:141] libmachine: (old-k8s-version-181389) Ensuring network mk-old-k8s-version-181389 is active
	I0122 21:22:55.602561  312675 main.go:141] libmachine: (old-k8s-version-181389) getting domain XML...
	I0122 21:22:55.603394  312675 main.go:141] libmachine: (old-k8s-version-181389) creating domain...
	I0122 21:22:56.946488  312675 main.go:141] libmachine: (old-k8s-version-181389) waiting for IP...
	I0122 21:22:56.947631  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:22:56.948231  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:22:56.948312  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:22:56.948203  312710 retry.go:31] will retry after 278.781648ms: waiting for domain to come up
	I0122 21:22:57.229044  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:22:57.229730  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:22:57.229758  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:22:57.229671  312710 retry.go:31] will retry after 294.035276ms: waiting for domain to come up
	I0122 21:22:57.525155  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:22:57.525780  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:22:57.525811  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:22:57.525755  312710 retry.go:31] will retry after 397.766392ms: waiting for domain to come up
	I0122 21:22:57.925636  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:22:57.926419  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:22:57.926484  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:22:57.926399  312710 retry.go:31] will retry after 587.409659ms: waiting for domain to come up
	I0122 21:22:58.515154  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:22:58.515786  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:22:58.515816  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:22:58.515736  312710 retry.go:31] will retry after 740.043526ms: waiting for domain to come up
	I0122 21:22:59.257714  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:22:59.258289  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:22:59.258355  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:22:59.258281  312710 retry.go:31] will retry after 801.678709ms: waiting for domain to come up
	I0122 21:23:00.061170  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:00.061633  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:00.061658  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:00.061596  312710 retry.go:31] will retry after 934.509381ms: waiting for domain to come up
	I0122 21:23:00.997763  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:00.998362  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:00.998394  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:00.998330  312710 retry.go:31] will retry after 1.191297332s: waiting for domain to come up
	I0122 21:23:02.191311  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:02.191862  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:02.191902  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:02.191827  312710 retry.go:31] will retry after 1.355376147s: waiting for domain to come up
	I0122 21:23:03.548984  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:03.549608  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:03.549632  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:03.549560  312710 retry.go:31] will retry after 1.630884765s: waiting for domain to come up
	I0122 21:23:05.181844  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:05.182484  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:05.182520  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:05.182435  312710 retry.go:31] will retry after 2.458841394s: waiting for domain to come up
	I0122 21:23:07.643661  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:07.644218  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:07.644257  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:07.644196  312710 retry.go:31] will retry after 2.922076644s: waiting for domain to come up
	I0122 21:23:10.568050  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:10.568697  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | unable to find current IP address of domain old-k8s-version-181389 in network mk-old-k8s-version-181389
	I0122 21:23:10.568730  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | I0122 21:23:10.568654  312710 retry.go:31] will retry after 3.784764454s: waiting for domain to come up
	I0122 21:23:14.356171  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.356739  312675 main.go:141] libmachine: (old-k8s-version-181389) found domain IP: 192.168.72.222
	I0122 21:23:14.356767  312675 main.go:141] libmachine: (old-k8s-version-181389) reserving static IP address...
	I0122 21:23:14.356800  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has current primary IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.357256  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "old-k8s-version-181389", mac: "52:54:00:b5:43:94", ip: "192.168.72.222"} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.357286  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | skip adding static IP to network mk-old-k8s-version-181389 - found existing host DHCP lease matching {name: "old-k8s-version-181389", mac: "52:54:00:b5:43:94", ip: "192.168.72.222"}
	I0122 21:23:14.357301  312675 main.go:141] libmachine: (old-k8s-version-181389) reserved static IP address 192.168.72.222 for domain old-k8s-version-181389
	I0122 21:23:14.357317  312675 main.go:141] libmachine: (old-k8s-version-181389) waiting for SSH...
	I0122 21:23:14.357332  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | Getting to WaitForSSH function...
	I0122 21:23:14.359587  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.360049  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.360082  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.360264  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | Using SSH client type: external
	I0122 21:23:14.360311  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa (-rw-------)
	I0122 21:23:14.360364  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:23:14.360389  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | About to run SSH command:
	I0122 21:23:14.360431  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | exit 0
	I0122 21:23:14.495143  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | SSH cmd err, output: <nil>: 
	I0122 21:23:14.495534  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetConfigRaw
	I0122 21:23:14.496222  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:23:14.499254  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.499764  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.499814  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.500178  312675 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/config.json ...
	I0122 21:23:14.500488  312675 machine.go:93] provisionDockerMachine start ...
	I0122 21:23:14.500521  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:23:14.500761  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:14.504131  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.504567  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.504621  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.504788  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:14.505034  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:14.505190  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:14.505376  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:14.505554  312675 main.go:141] libmachine: Using SSH client type: native
	I0122 21:23:14.505827  312675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:23:14.505851  312675 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:23:14.623906  312675 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:23:14.623945  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:23:14.624230  312675 buildroot.go:166] provisioning hostname "old-k8s-version-181389"
	I0122 21:23:14.624258  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:23:14.624438  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:14.627961  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.628361  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.628404  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.628549  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:14.628787  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:14.629040  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:14.629241  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:14.629452  312675 main.go:141] libmachine: Using SSH client type: native
	I0122 21:23:14.629799  312675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:23:14.629824  312675 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-181389 && echo "old-k8s-version-181389" | sudo tee /etc/hostname
	I0122 21:23:14.765426  312675 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-181389
	
	I0122 21:23:14.765475  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:14.769115  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.769581  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.769622  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.769858  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:14.770109  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:14.770315  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:14.770529  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:14.770738  312675 main.go:141] libmachine: Using SSH client type: native
	I0122 21:23:14.771000  312675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:23:14.771029  312675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-181389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-181389/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-181389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:23:14.898805  312675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:23:14.898840  312675 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:23:14.898869  312675 buildroot.go:174] setting up certificates
	I0122 21:23:14.898884  312675 provision.go:84] configureAuth start
	I0122 21:23:14.898898  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetMachineName
	I0122 21:23:14.899176  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:23:14.902107  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.902564  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.902598  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.902814  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:14.905391  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.905835  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:14.905864  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:14.906012  312675 provision.go:143] copyHostCerts
	I0122 21:23:14.906118  312675 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:23:14.906146  312675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:23:14.906266  312675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:23:14.906401  312675 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:23:14.906412  312675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:23:14.906453  312675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:23:14.906544  312675 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:23:14.906554  312675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:23:14.906588  312675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:23:14.906664  312675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-181389 san=[127.0.0.1 192.168.72.222 localhost minikube old-k8s-version-181389]
	I0122 21:23:15.188870  312675 provision.go:177] copyRemoteCerts
	I0122 21:23:15.188947  312675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:23:15.188979  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:15.193609  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.194038  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.194069  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.194374  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:15.194611  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.194838  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:15.195007  312675 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:23:15.290424  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:23:15.319724  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0122 21:23:15.350826  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 21:23:15.382212  312675 provision.go:87] duration metric: took 483.291884ms to configureAuth
	I0122 21:23:15.382252  312675 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:23:15.382490  312675 config.go:182] Loaded profile config "old-k8s-version-181389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0122 21:23:15.382618  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:15.385811  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.386178  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.386275  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.386525  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:15.386777  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.386964  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.387119  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:15.387333  312675 main.go:141] libmachine: Using SSH client type: native
	I0122 21:23:15.387549  312675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:23:15.387567  312675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:23:15.657664  312675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:23:15.657706  312675 machine.go:96] duration metric: took 1.157195639s to provisionDockerMachine
	I0122 21:23:15.657723  312675 start.go:293] postStartSetup for "old-k8s-version-181389" (driver="kvm2")
	I0122 21:23:15.657739  312675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:23:15.657766  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:23:15.658128  312675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:23:15.658168  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:15.661696  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.662059  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.662094  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.662292  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:15.662533  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.662736  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:15.662929  312675 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:23:15.750600  312675 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:23:15.755427  312675 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:23:15.755463  312675 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:23:15.755554  312675 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:23:15.755663  312675 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:23:15.755784  312675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:23:15.767463  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:23:15.797526  312675 start.go:296] duration metric: took 139.784503ms for postStartSetup
	I0122 21:23:15.797578  312675 fix.go:56] duration metric: took 20.221812923s for fixHost
	I0122 21:23:15.797600  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:15.800536  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.800963  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.801000  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.801224  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:15.801484  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.801649  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.801803  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:15.802004  312675 main.go:141] libmachine: Using SSH client type: native
	I0122 21:23:15.802267  312675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.222 22 <nil> <nil>}
	I0122 21:23:15.802287  312675 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:23:15.924400  312675 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580995.893445511
	
	I0122 21:23:15.924433  312675 fix.go:216] guest clock: 1737580995.893445511
	I0122 21:23:15.924444  312675 fix.go:229] Guest: 2025-01-22 21:23:15.893445511 +0000 UTC Remote: 2025-01-22 21:23:15.797582015 +0000 UTC m=+20.388546011 (delta=95.863496ms)
	I0122 21:23:15.924507  312675 fix.go:200] guest clock delta is within tolerance: 95.863496ms
	I0122 21:23:15.924519  312675 start.go:83] releasing machines lock for "old-k8s-version-181389", held for 20.348764177s
	I0122 21:23:15.924551  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:23:15.924893  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:23:15.928202  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.928560  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.928592  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.928818  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:23:15.929488  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:23:15.929731  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .DriverName
	I0122 21:23:15.929832  312675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:23:15.929901  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:15.929990  312675 ssh_runner.go:195] Run: cat /version.json
	I0122 21:23:15.930017  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHHostname
	I0122 21:23:15.933157  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.933532  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.933562  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.933590  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.933765  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:15.934005  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.934135  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:15.934162  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:15.934207  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:15.934384  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHPort
	I0122 21:23:15.934378  312675 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:23:15.934558  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHKeyPath
	I0122 21:23:15.934724  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetSSHUsername
	I0122 21:23:15.934845  312675 sshutil.go:53] new ssh client: &{IP:192.168.72.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa Username:docker}
	I0122 21:23:16.045475  312675 ssh_runner.go:195] Run: systemctl --version
	I0122 21:23:16.053815  312675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:23:16.205603  312675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:23:16.213559  312675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:23:16.213664  312675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:23:16.232317  312675 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:23:16.232358  312675 start.go:495] detecting cgroup driver to use...
	I0122 21:23:16.232452  312675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:23:16.250490  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:23:16.266334  312675 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:23:16.266413  312675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:23:16.282733  312675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:23:16.298691  312675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:23:16.427692  312675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:23:16.600856  312675 docker.go:233] disabling docker service ...
	I0122 21:23:16.600975  312675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:23:16.617665  312675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:23:16.635492  312675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:23:16.793938  312675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:23:16.930291  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:23:16.948643  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:23:16.972218  312675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0122 21:23:16.972297  312675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:23:16.986437  312675 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:23:16.986516  312675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:23:17.000535  312675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:23:17.015144  312675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:23:17.029292  312675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:23:17.044458  312675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:23:17.056164  312675 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:23:17.056228  312675 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:23:17.072827  312675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:23:17.085258  312675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:23:17.226493  312675 ssh_runner.go:195] Run: sudo systemctl restart crio
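	(Reference note, not part of the log: the cri-o provisioning commands that the runner executed over SSH above can be consolidated into one script along the following lines. This is a sketch reconstructed from the commands in this log; paths and values are copied from the lines above, and it assumes a Buildroot guest with cri-o and systemd already installed.)
	#!/bin/sh
	# Sketch of the cri-o configuration steps shown in the log above.
	set -e
	# Point crictl at the cri-o socket.
	sudo mkdir -p /etc
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# Select the pause image and the cgroupfs cgroup manager, and pin conmon to the pod cgroup.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Drop any stale minikube CNI config.
	sudo rm -rf /etc/cni/net.mk
	# Ensure bridged traffic is visible to iptables and IPv4 forwarding is on.
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	# Apply the configuration.
	sudo systemctl daemon-reload
	sudo systemctl restart crio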
	I0122 21:23:17.339338  312675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:23:17.339430  312675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:23:17.349367  312675 start.go:563] Will wait 60s for crictl version
	I0122 21:23:17.349437  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:17.354293  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:23:17.397909  312675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:23:17.398002  312675 ssh_runner.go:195] Run: crio --version
	I0122 21:23:17.432258  312675 ssh_runner.go:195] Run: crio --version
	I0122 21:23:17.469988  312675 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0122 21:23:17.471353  312675 main.go:141] libmachine: (old-k8s-version-181389) Calling .GetIP
	I0122 21:23:17.474497  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:17.474952  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:94", ip: ""} in network mk-old-k8s-version-181389: {Iface:virbr4 ExpiryTime:2025-01-22 22:23:08 +0000 UTC Type:0 Mac:52:54:00:b5:43:94 Iaid: IPaddr:192.168.72.222 Prefix:24 Hostname:old-k8s-version-181389 Clientid:01:52:54:00:b5:43:94}
	I0122 21:23:17.475029  312675 main.go:141] libmachine: (old-k8s-version-181389) DBG | domain old-k8s-version-181389 has defined IP address 192.168.72.222 and MAC address 52:54:00:b5:43:94 in network mk-old-k8s-version-181389
	I0122 21:23:17.475198  312675 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0122 21:23:17.480268  312675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:23:17.494774  312675 kubeadm.go:883] updating cluster {Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:23:17.494983  312675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 21:23:17.495043  312675 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:23:17.550490  312675 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0122 21:23:17.550575  312675 ssh_runner.go:195] Run: which lz4
	I0122 21:23:17.555423  312675 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:23:17.560726  312675 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:23:17.560773  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0122 21:23:19.536268  312675 crio.go:462] duration metric: took 1.980884168s to copy over tarball
	I0122 21:23:19.536349  312675 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:23:22.969493  312675 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.433109572s)
	I0122 21:23:22.969531  312675 crio.go:469] duration metric: took 3.433227553s to extract the tarball
	I0122 21:23:22.969541  312675 ssh_runner.go:146] rm: /preloaded.tar.lz4
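	(Reference note, not part of the log: the preload handling above copies the lz4 tarball to the guest, extracts it under /var while preserving extended attributes, and removes it. A rough manual equivalent is sketched below with the paths and host taken from this log; minikube performs these steps through its internal SSH runner rather than literal scp/ssh invocations.)
	# Sketch of the preloaded-image transfer and extraction shown above.
	scp -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa \
	    /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	    docker@192.168.72.222:/preloaded.tar.lz4
	ssh -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/old-k8s-version-181389/id_rsa docker@192.168.72.222 \
	    'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'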
	I0122 21:23:23.018951  312675 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:23:23.064317  312675 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0122 21:23:23.064346  312675 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0122 21:23:23.064439  312675 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:23:23.064504  312675 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.064511  312675 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.064526  312675 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0122 21:23:23.064433  312675 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.064448  312675 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.064565  312675 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.064880  312675 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.066268  312675 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.066290  312675 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.066299  312675 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.066299  312675 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:23:23.066275  312675 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.066267  312675 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.066304  312675 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.066268  312675 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0122 21:23:23.216929  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.219665  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.226493  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.231733  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.244502  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.249974  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.254577  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0122 21:23:23.321208  312675 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0122 21:23:23.321266  312675 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.321324  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.392191  312675 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0122 21:23:23.392251  312675 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.392306  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.427899  312675 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0122 21:23:23.427964  312675 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.427899  312675 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0122 21:23:23.428015  312675 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.428021  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.428062  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.451860  312675 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0122 21:23:23.451890  312675 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0122 21:23:23.451921  312675 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.451929  312675 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0122 21:23:23.451979  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.452029  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.451981  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.452152  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.452403  312675 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0122 21:23:23.452447  312675 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.452492  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.452503  312675 ssh_runner.go:195] Run: which crictl
	I0122 21:23:23.452508  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.566984  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.567038  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:23:23.566984  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.567116  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.567140  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.567199  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.567203  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.735363  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0122 21:23:23.735471  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.735533  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0122 21:23:23.735576  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0122 21:23:23.735646  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.764261  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:23:23.764273  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0122 21:23:23.932755  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0122 21:23:23.932767  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0122 21:23:23.932883  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0122 21:23:23.932900  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0122 21:23:23.933030  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0122 21:23:23.944271  312675 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0122 21:23:23.944377  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0122 21:23:24.015684  312675 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:23:24.031577  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0122 21:23:24.031591  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0122 21:23:24.034809  312675 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0122 21:23:24.178602  312675 cache_images.go:92] duration metric: took 1.11423503s to LoadCachedImages
	W0122 21:23:24.178722  312675 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
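	The warning above comes from a plain stat of the cached image tarball on the host before transfer; when the file is missing, the failure is logged and start-up continues. A minimal, illustrative Go sketch of that check (the path is copied from the log purely as an example):
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// Reproduce the stat check behind the warning for one image path.
		p := "/home/jenkins/minikube-integration/20288-247142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0"
		if _, err := os.Stat(p); err != nil {
			fmt.Println("cached image not present:", err)
		}
	}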
	I0122 21:23:24.178740  312675 kubeadm.go:934] updating node { 192.168.72.222 8443 v1.20.0 crio true true} ...
	I0122 21:23:24.178876  312675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-181389 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
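	The kubelet unit drop-in shown above is generated from the cluster config. As a rough sketch, not minikube's own code, a similar ExecStart line could be rendered from a small struct with text/template; the struct and field names here are illustrative assumptions, while the flag values are taken from the log:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// kubeletOpts holds the handful of values substituted into the drop-in.
	type kubeletOpts struct {
		BinDir    string
		Hostname  string
		NodeIP    string
		CRISocket string
	}
	
	const dropIn = `[Service]
	ExecStart=
	ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}
	`
	
	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, kubeletOpts{
			BinDir:    "/var/lib/minikube/binaries/v1.20.0",
			Hostname:  "old-k8s-version-181389",
			NodeIP:    "192.168.72.222",
			CRISocket: "unix:///var/run/crio/crio.sock",
		})
	}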
	I0122 21:23:24.178987  312675 ssh_runner.go:195] Run: crio config
	I0122 21:23:24.241313  312675 cni.go:84] Creating CNI manager for ""
	I0122 21:23:24.241344  312675 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:23:24.241355  312675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:23:24.241376  312675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.222 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-181389 NodeName:old-k8s-version-181389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0122 21:23:24.241559  312675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-181389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
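	The generated kubeadm config above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of splitting and decoding such a file, assuming gopkg.in/yaml.v3 is available; decoding into generic maps is only for illustration, a real tool would decode into the typed config structs:
	
	package main
	
	import (
		"fmt"
		"strings"
	
		"gopkg.in/yaml.v3"
	)
	
	const multiDoc = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	`
	
	func main() {
		// Each "---"-separated document is decoded on its own.
		for _, doc := range strings.Split(multiDoc, "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				panic(err)
			}
			fmt.Println(m["kind"])
		}
	}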
	
	I0122 21:23:24.241646  312675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0122 21:23:24.253964  312675 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:23:24.254052  312675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:23:24.265767  312675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0122 21:23:24.286281  312675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:23:24.307129  312675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0122 21:23:24.331561  312675 ssh_runner.go:195] Run: grep 192.168.72.222	control-plane.minikube.internal$ /etc/hosts
	I0122 21:23:24.336366  312675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
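	The bash one-liner above makes the control-plane /etc/hosts entry idempotent: any stale line ending in the hostname is filtered out and a fresh "ip<TAB>name" line is appended. A minimal Go sketch of the same idea, run against a temporary file rather than the guest's /etc/hosts (the file name is an assumption for the example):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostEntry rewrites a hosts-style file so that exactly one line
	// maps name to ip, echoing the grep -v / echo / cp pipeline above.
	func ensureHostEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		tmp := "/tmp/hosts-example"
		_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := ensureHostEntry(tmp, "192.168.72.222", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
		out, _ := os.ReadFile(tmp)
		fmt.Print(string(out))
	}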
	I0122 21:23:24.353430  312675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:23:24.488195  312675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:23:24.510522  312675 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389 for IP: 192.168.72.222
	I0122 21:23:24.510556  312675 certs.go:194] generating shared ca certs ...
	I0122 21:23:24.510580  312675 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:23:24.510836  312675 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:23:24.510911  312675 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:23:24.510931  312675 certs.go:256] generating profile certs ...
	I0122 21:23:24.511076  312675 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/client.key
	I0122 21:23:24.511141  312675 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key.d562c0b4
	I0122 21:23:24.511178  312675 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.key
	I0122 21:23:24.511358  312675 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:23:24.511391  312675 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:23:24.511402  312675 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:23:24.511423  312675 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:23:24.511447  312675 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:23:24.511473  312675 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:23:24.511514  312675 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:23:24.512317  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:23:24.557361  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:23:24.590057  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:23:24.625775  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:23:24.671840  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0122 21:23:24.711286  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 21:23:24.767841  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:23:24.813069  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/old-k8s-version-181389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0122 21:23:24.843206  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:23:24.873688  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:23:24.906740  312675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:23:24.937624  312675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:23:24.958465  312675 ssh_runner.go:195] Run: openssl version
	I0122 21:23:24.965493  312675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:23:24.979614  312675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:23:24.985408  312675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:23:24.985502  312675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:23:24.992757  312675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:23:25.007687  312675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:23:25.021678  312675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:23:25.027665  312675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:23:25.027759  312675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:23:25.034858  312675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:23:25.049166  312675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:23:25.063371  312675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:23:25.069247  312675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:23:25.069320  312675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:23:25.076257  312675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
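	Each CA certificate copied above is hashed with "openssl x509 -hash -noout" and exposed under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A rough Go sketch of those two steps, shelling out to openssl just as the log does; paths and error handling are simplified for illustration:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// linkCACert asks openssl for the subject hash of a PEM file and
	// symlinks <hash>.0 in certsDir to it, skipping existing links.
	func linkCACert(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked
		}
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}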
	I0122 21:23:25.090797  312675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:23:25.096786  312675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:23:25.104324  312675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:23:25.112116  312675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:23:25.119658  312675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:23:25.127714  312675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:23:25.135520  312675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
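	The "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 24 hours. The same question can be answered natively with crypto/x509; a small illustrative sketch (the certificate path is an assumption):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d (openssl's -checkend exits non-zero in that case).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}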
	I0122 21:23:25.143066  312675 kubeadm.go:392] StartCluster: {Name:old-k8s-version-181389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-181389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.222 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:23:25.143204  312675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:23:25.143295  312675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:23:25.187741  312675 cri.go:89] found id: ""
	I0122 21:23:25.187825  312675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:23:25.200549  312675 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:23:25.200586  312675 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:23:25.200646  312675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:23:25.212400  312675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:23:25.213921  312675 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-181389" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:23:25.214707  312675 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-181389" cluster setting kubeconfig missing "old-k8s-version-181389" context setting]
	I0122 21:23:25.215685  312675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:23:25.217813  312675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:23:25.229872  312675 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.222
	I0122 21:23:25.229930  312675 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:23:25.229975  312675 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:23:25.230042  312675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:23:25.279253  312675 cri.go:89] found id: ""
	I0122 21:23:25.279373  312675 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:23:25.299285  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:23:25.311865  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:23:25.311891  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:23:25.311944  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:23:25.323521  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:23:25.323631  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:23:25.337108  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:23:25.350032  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:23:25.350113  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:23:25.362357  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:23:25.374782  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:23:25.374855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:23:25.388259  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:23:25.400974  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:23:25.401049  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
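	The grep-then-rm sequence above keeps a kubeconfig file only if it already points at https://control-plane.minikube.internal:8443 and deletes it otherwise, so kubeadm will regenerate it. A rough Go sketch of that per-file decision (not minikube's actual implementation):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// cleanStaleConfig removes a kubeconfig-style file unless it already
	// contains the expected control-plane endpoint.
	func cleanStaleConfig(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return nil // keep: already targets the right endpoint
		}
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			return err
		}
		return nil
	}
	
	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			if err := cleanStaleConfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}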
	I0122 21:23:25.414171  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:23:25.427673  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:23:25.582630  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:23:26.299437  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:23:26.559120  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:23:26.671900  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
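	Rather than a full "kubeadm init", the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config so that existing state is reused. An illustrative Go sketch of driving those phases in order; the binary and config paths are taken from the log and hard-coded only for the example:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// runInitPhases executes each kubeadm init phase in sequence,
	// stopping at the first failure.
	func runInitPhases(binDir, cfg string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}
		for _, args := range phases {
			cmd := exec.Command(binDir+"/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("kubeadm %v: %w", args, err)
			}
		}
		return nil
	}
	
	func main() {
		if err := runInitPhases("/var/lib/minikube/binaries/v1.20.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}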
	I0122 21:23:26.786019  312675 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:23:26.786159  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:27.287279  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:27.787194  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:28.286265  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:28.786836  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:29.286362  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:29.786403  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:30.287036  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:30.786887  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:31.286349  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:31.786597  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:32.287183  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:32.786383  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:33.286262  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:33.787148  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:34.286337  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:34.786368  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:35.286785  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:35.786767  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:36.287127  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:36.787213  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:37.286360  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:37.786669  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:38.286530  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:38.787058  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:39.286352  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:39.786641  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:40.286380  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:40.787066  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:41.286366  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:41.786382  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:42.286249  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:42.786873  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:43.286306  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:43.786397  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:44.287070  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:44.786261  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:45.287236  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:45.786292  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:46.286405  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:46.786386  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:47.286333  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:47.786338  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:48.286351  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:48.786495  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:49.286331  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:49.786747  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:50.287043  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:50.786372  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:51.286426  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:51.787115  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:52.286358  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:52.786645  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:53.286245  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:53.786319  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:54.286243  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:54.786209  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:55.286481  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:55.786894  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:56.286343  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:56.786980  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:57.286362  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:57.786885  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:58.286341  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:58.787173  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:59.286332  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:23:59.786368  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:00.287021  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:00.787047  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:01.286349  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:01.786679  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:02.286740  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:02.786931  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:03.286226  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:03.786939  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:04.286648  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:04.786584  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:05.286790  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:05.786931  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:06.286472  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:06.786362  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:07.286335  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:07.787381  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:08.286333  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:08.786935  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:09.286342  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:09.786976  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:10.286934  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:10.786662  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:11.287235  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:11.786865  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:12.286391  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:12.787217  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:13.286881  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:13.786941  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:14.287133  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:14.787076  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:15.286892  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:15.787139  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:16.286340  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:16.786417  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:17.286340  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:17.787075  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:18.287025  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:18.786805  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:19.286851  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:19.787034  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:20.286383  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:20.786558  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:21.286412  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:21.786328  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:22.287016  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:22.787160  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:23.287119  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:23.786249  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:24.286376  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:24.786988  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:25.287037  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:25.786549  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:26.286376  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
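	The block of pgrep calls above is a wait loop: a kube-apiserver process is polled for roughly every half second until it appears or the wait times out (here it never appears, so the loop runs its full course). A simplified local sketch of such a poll loop; the timeout value is an assumption, and minikube runs the check over SSH on the node:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForAPIServer polls for a kube-apiserver process every 500ms,
	// matching the cadence of the pgrep calls above, until timeout.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}
	
	func main() {
		if err := waitForAPIServer(60 * time.Second); err != nil {
			fmt.Println(err)
		}
	}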
	I0122 21:24:26.786363  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:26.786502  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:26.845913  312675 cri.go:89] found id: ""
	I0122 21:24:26.845975  312675 logs.go:282] 0 containers: []
	W0122 21:24:26.845988  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:26.845997  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:26.846068  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:26.886065  312675 cri.go:89] found id: ""
	I0122 21:24:26.886099  312675 logs.go:282] 0 containers: []
	W0122 21:24:26.886107  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:26.886115  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:26.886172  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:26.925551  312675 cri.go:89] found id: ""
	I0122 21:24:26.925585  312675 logs.go:282] 0 containers: []
	W0122 21:24:26.925597  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:26.925606  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:26.925678  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:26.966848  312675 cri.go:89] found id: ""
	I0122 21:24:26.966884  312675 logs.go:282] 0 containers: []
	W0122 21:24:26.966893  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:26.966899  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:26.966956  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:27.009368  312675 cri.go:89] found id: ""
	I0122 21:24:27.009397  312675 logs.go:282] 0 containers: []
	W0122 21:24:27.009406  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:27.009412  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:27.009472  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:27.056317  312675 cri.go:89] found id: ""
	I0122 21:24:27.056355  312675 logs.go:282] 0 containers: []
	W0122 21:24:27.056367  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:27.056376  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:27.056435  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:27.098833  312675 cri.go:89] found id: ""
	I0122 21:24:27.098866  312675 logs.go:282] 0 containers: []
	W0122 21:24:27.098875  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:27.098882  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:27.098948  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:27.141604  312675 cri.go:89] found id: ""
	I0122 21:24:27.141650  312675 logs.go:282] 0 containers: []
	W0122 21:24:27.141660  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:27.141685  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:27.141703  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:27.198956  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:27.199005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:27.214472  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:27.214509  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:27.367494  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:27.367518  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:27.367532  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:27.445602  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:27.445649  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
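	The container-status command above is deliberately defensive: it resolves crictl with "which", and if that fails it falls back to "docker ps -a". A small Go sketch of the same fallback, illustrative only:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// containerStatus prefers crictl and falls back to docker if crictl
	// is missing or fails, mirroring the shell fallback in the log.
	func containerStatus() ([]byte, error) {
		if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
		return exec.Command("docker", "ps", "-a").CombinedOutput()
	}
	
	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker produced a listing:", err)
			return
		}
		fmt.Printf("%s", out)
	}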
	I0122 21:24:29.994757  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:30.029376  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:30.029453  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:30.070998  312675 cri.go:89] found id: ""
	I0122 21:24:30.071029  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.071040  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:30.071048  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:30.071115  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:30.112317  312675 cri.go:89] found id: ""
	I0122 21:24:30.112353  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.112365  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:30.112372  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:30.112441  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:30.158911  312675 cri.go:89] found id: ""
	I0122 21:24:30.158942  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.158953  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:30.158961  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:30.159032  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:30.199836  312675 cri.go:89] found id: ""
	I0122 21:24:30.199867  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.199875  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:30.199881  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:30.199942  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:30.241989  312675 cri.go:89] found id: ""
	I0122 21:24:30.242028  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.242040  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:30.242048  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:30.242112  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:30.283246  312675 cri.go:89] found id: ""
	I0122 21:24:30.283282  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.283294  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:30.283303  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:30.283369  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:30.327878  312675 cri.go:89] found id: ""
	I0122 21:24:30.327914  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.327926  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:30.327934  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:30.328018  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:30.368087  312675 cri.go:89] found id: ""
	I0122 21:24:30.368126  312675 logs.go:282] 0 containers: []
	W0122 21:24:30.368145  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:30.368161  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:30.368178  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:30.424620  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:30.424669  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:30.440272  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:30.440316  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:30.524352  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:30.524374  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:30.524389  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:30.603683  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:30.603731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:33.159811  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:33.177088  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:33.177178  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:33.220642  312675 cri.go:89] found id: ""
	I0122 21:24:33.220684  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.220696  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:33.220706  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:33.220779  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:33.265939  312675 cri.go:89] found id: ""
	I0122 21:24:33.265975  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.265988  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:33.265996  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:33.266069  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:33.306480  312675 cri.go:89] found id: ""
	I0122 21:24:33.306519  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.306532  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:33.306540  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:33.306613  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:33.347394  312675 cri.go:89] found id: ""
	I0122 21:24:33.347432  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.347441  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:33.347450  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:33.347522  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:33.389087  312675 cri.go:89] found id: ""
	I0122 21:24:33.389127  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.389139  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:33.389148  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:33.389223  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:33.441860  312675 cri.go:89] found id: ""
	I0122 21:24:33.441897  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.441910  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:33.441929  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:33.441999  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:33.486690  312675 cri.go:89] found id: ""
	I0122 21:24:33.486724  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.486735  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:33.486743  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:33.486817  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:33.528380  312675 cri.go:89] found id: ""
	I0122 21:24:33.528417  312675 logs.go:282] 0 containers: []
	W0122 21:24:33.528430  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:33.528444  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:33.528469  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:33.581206  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:33.581256  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:33.598757  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:33.598802  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:33.684718  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:33.684767  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:33.684786  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:33.773044  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:33.773095  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:36.319432  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:36.334082  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:36.334152  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:36.374687  312675 cri.go:89] found id: ""
	I0122 21:24:36.374724  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.374736  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:36.374744  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:36.374822  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:36.419241  312675 cri.go:89] found id: ""
	I0122 21:24:36.419273  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.419281  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:36.419288  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:36.419360  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:36.465994  312675 cri.go:89] found id: ""
	I0122 21:24:36.466039  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.466054  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:36.466062  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:36.466128  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:36.527557  312675 cri.go:89] found id: ""
	I0122 21:24:36.527605  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.527617  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:36.527625  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:36.527694  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:36.572750  312675 cri.go:89] found id: ""
	I0122 21:24:36.572788  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.572800  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:36.572808  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:36.572873  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:36.626041  312675 cri.go:89] found id: ""
	I0122 21:24:36.626081  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.626092  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:36.626101  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:36.626172  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:36.668277  312675 cri.go:89] found id: ""
	I0122 21:24:36.668311  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.668322  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:36.668331  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:36.668399  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:36.706591  312675 cri.go:89] found id: ""
	I0122 21:24:36.706629  312675 logs.go:282] 0 containers: []
	W0122 21:24:36.706641  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:36.706654  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:36.706674  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:36.767037  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:36.767100  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:36.782626  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:36.782660  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:36.867678  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:36.867702  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:36.867719  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:36.950391  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:36.950440  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:39.506323  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:39.523046  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:39.523120  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:39.569539  312675 cri.go:89] found id: ""
	I0122 21:24:39.569579  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.569591  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:39.569600  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:39.569670  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:39.616834  312675 cri.go:89] found id: ""
	I0122 21:24:39.616878  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.616890  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:39.616900  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:39.616971  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:39.658252  312675 cri.go:89] found id: ""
	I0122 21:24:39.658291  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.658302  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:39.658311  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:39.658376  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:39.696968  312675 cri.go:89] found id: ""
	I0122 21:24:39.697025  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.697037  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:39.697047  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:39.697116  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:39.739178  312675 cri.go:89] found id: ""
	I0122 21:24:39.739208  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.739217  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:39.739227  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:39.739282  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:39.780361  312675 cri.go:89] found id: ""
	I0122 21:24:39.780392  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.780404  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:39.780413  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:39.780482  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:39.821206  312675 cri.go:89] found id: ""
	I0122 21:24:39.821246  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.821259  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:39.821267  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:39.821341  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:39.862202  312675 cri.go:89] found id: ""
	I0122 21:24:39.862240  312675 logs.go:282] 0 containers: []
	W0122 21:24:39.862252  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:39.862267  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:39.862284  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:39.912169  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:39.912228  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:39.965525  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:39.965583  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:39.981322  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:39.981359  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:40.068419  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:40.068445  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:40.068461  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:42.651105  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:42.666361  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:42.666429  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:42.708819  312675 cri.go:89] found id: ""
	I0122 21:24:42.708858  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.708887  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:42.708894  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:42.708966  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:42.753453  312675 cri.go:89] found id: ""
	I0122 21:24:42.753484  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.753493  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:42.753499  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:42.753557  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:42.794029  312675 cri.go:89] found id: ""
	I0122 21:24:42.794062  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.794073  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:42.794081  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:42.794152  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:42.834479  312675 cri.go:89] found id: ""
	I0122 21:24:42.834507  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.834516  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:42.834522  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:42.834583  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:42.875649  312675 cri.go:89] found id: ""
	I0122 21:24:42.875692  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.875705  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:42.875714  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:42.875779  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:42.923247  312675 cri.go:89] found id: ""
	I0122 21:24:42.923285  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.923297  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:42.923305  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:42.923360  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:42.965920  312675 cri.go:89] found id: ""
	I0122 21:24:42.965954  312675 logs.go:282] 0 containers: []
	W0122 21:24:42.965965  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:42.965973  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:42.966039  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:43.011287  312675 cri.go:89] found id: ""
	I0122 21:24:43.011330  312675 logs.go:282] 0 containers: []
	W0122 21:24:43.011342  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:43.011356  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:43.011371  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:43.090962  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:43.091008  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:43.137294  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:43.137326  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:43.193740  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:43.193787  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:43.209560  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:43.209595  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:43.291805  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:45.792624  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:45.809086  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:45.809162  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:45.854085  312675 cri.go:89] found id: ""
	I0122 21:24:45.854118  312675 logs.go:282] 0 containers: []
	W0122 21:24:45.854130  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:45.854140  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:45.854231  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:45.894622  312675 cri.go:89] found id: ""
	I0122 21:24:45.894654  312675 logs.go:282] 0 containers: []
	W0122 21:24:45.894666  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:45.894674  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:45.894748  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:45.936973  312675 cri.go:89] found id: ""
	I0122 21:24:45.937005  312675 logs.go:282] 0 containers: []
	W0122 21:24:45.937016  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:45.937024  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:45.937098  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:45.977206  312675 cri.go:89] found id: ""
	I0122 21:24:45.977247  312675 logs.go:282] 0 containers: []
	W0122 21:24:45.977259  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:45.977271  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:45.977336  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:46.019961  312675 cri.go:89] found id: ""
	I0122 21:24:46.020000  312675 logs.go:282] 0 containers: []
	W0122 21:24:46.020009  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:46.020016  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:46.020070  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:46.060830  312675 cri.go:89] found id: ""
	I0122 21:24:46.060870  312675 logs.go:282] 0 containers: []
	W0122 21:24:46.060883  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:46.060893  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:46.060968  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:46.101829  312675 cri.go:89] found id: ""
	I0122 21:24:46.101857  312675 logs.go:282] 0 containers: []
	W0122 21:24:46.101866  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:46.101873  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:46.101967  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:46.141120  312675 cri.go:89] found id: ""
	I0122 21:24:46.141149  312675 logs.go:282] 0 containers: []
	W0122 21:24:46.141165  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:46.141178  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:46.141195  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:46.200323  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:46.200368  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:46.216738  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:46.216776  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:46.294335  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:46.294369  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:46.294395  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:46.371114  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:46.371161  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
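	[editor's note] The "container status" step in each cycle shells out with a fallback: it prefers crictl and falls back to `docker ps -a` if crictl is missing or fails. A small, hypothetical Go sketch of running that same one-liner locally (the log runs it over SSH on the node) is shown below for illustration; it is not the minikube implementation.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Identical command string to the log line above: resolve crictl if present,
		// otherwise fall back to docker for listing all containers.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status collection failed:", err)
		}
		fmt.Print(string(out))
	}
	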
	I0122 21:24:48.923420  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:48.937566  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:48.937646  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:48.977420  312675 cri.go:89] found id: ""
	I0122 21:24:48.977491  312675 logs.go:282] 0 containers: []
	W0122 21:24:48.977517  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:48.977532  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:48.977612  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:49.016334  312675 cri.go:89] found id: ""
	I0122 21:24:49.016367  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.016375  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:49.016382  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:49.016453  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:49.059001  312675 cri.go:89] found id: ""
	I0122 21:24:49.059038  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.059049  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:49.059057  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:49.059135  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:49.105044  312675 cri.go:89] found id: ""
	I0122 21:24:49.105082  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.105091  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:49.105100  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:49.105162  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:49.146490  312675 cri.go:89] found id: ""
	I0122 21:24:49.146529  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.146539  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:49.146547  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:49.146620  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:49.184917  312675 cri.go:89] found id: ""
	I0122 21:24:49.184945  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.184960  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:49.184967  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:49.185034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:49.226882  312675 cri.go:89] found id: ""
	I0122 21:24:49.226917  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.226929  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:49.226938  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:49.227026  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:49.267497  312675 cri.go:89] found id: ""
	I0122 21:24:49.267526  312675 logs.go:282] 0 containers: []
	W0122 21:24:49.267534  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:49.267545  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:49.267559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:49.310835  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:49.310881  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:49.362845  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:49.362920  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:49.379815  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:49.379861  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:49.463736  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:49.463766  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:49.463781  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:52.047369  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:52.061920  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:52.061995  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:52.104293  312675 cri.go:89] found id: ""
	I0122 21:24:52.104332  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.104344  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:52.104356  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:52.104434  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:52.144851  312675 cri.go:89] found id: ""
	I0122 21:24:52.144888  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.144901  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:52.144909  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:52.144974  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:52.184771  312675 cri.go:89] found id: ""
	I0122 21:24:52.184811  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.184821  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:52.184828  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:52.184886  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:52.227292  312675 cri.go:89] found id: ""
	I0122 21:24:52.227329  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.227340  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:52.227350  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:52.227426  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:52.272945  312675 cri.go:89] found id: ""
	I0122 21:24:52.272987  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.272997  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:52.273009  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:52.273077  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:52.315876  312675 cri.go:89] found id: ""
	I0122 21:24:52.315907  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.315915  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:52.315922  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:52.315983  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:52.360150  312675 cri.go:89] found id: ""
	I0122 21:24:52.360189  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.360208  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:52.360218  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:52.360288  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:52.402451  312675 cri.go:89] found id: ""
	I0122 21:24:52.402483  312675 logs.go:282] 0 containers: []
	W0122 21:24:52.402491  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:52.402503  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:52.402520  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:52.460197  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:52.460241  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:52.478226  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:52.478264  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:52.556834  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:52.556890  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:52.556909  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:52.637337  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:52.637399  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:55.189439  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:55.203995  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:55.204080  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:55.242058  312675 cri.go:89] found id: ""
	I0122 21:24:55.242091  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.242102  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:55.242110  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:55.242205  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:55.279643  312675 cri.go:89] found id: ""
	I0122 21:24:55.279683  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.279695  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:55.279704  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:55.279776  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:55.323153  312675 cri.go:89] found id: ""
	I0122 21:24:55.323197  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.323208  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:55.323217  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:55.323287  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:55.365417  312675 cri.go:89] found id: ""
	I0122 21:24:55.365450  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.365460  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:55.365469  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:55.365531  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:55.403658  312675 cri.go:89] found id: ""
	I0122 21:24:55.403689  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.403697  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:55.403705  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:55.403768  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:55.442156  312675 cri.go:89] found id: ""
	I0122 21:24:55.442205  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.442229  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:55.442240  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:55.442310  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:55.481885  312675 cri.go:89] found id: ""
	I0122 21:24:55.481921  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.481933  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:55.481949  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:55.482023  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:55.519214  312675 cri.go:89] found id: ""
	I0122 21:24:55.519250  312675 logs.go:282] 0 containers: []
	W0122 21:24:55.519259  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:55.519270  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:55.519284  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:55.596548  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:55.596596  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:24:55.642687  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:55.642730  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:55.696515  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:55.696561  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:55.712213  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:55.712248  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:55.795203  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:58.296923  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:24:58.312410  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:24:58.312515  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:24:58.354203  312675 cri.go:89] found id: ""
	I0122 21:24:58.354243  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.354256  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:24:58.354266  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:24:58.354342  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:24:58.395413  312675 cri.go:89] found id: ""
	I0122 21:24:58.395446  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.395458  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:24:58.395467  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:24:58.395536  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:24:58.436195  312675 cri.go:89] found id: ""
	I0122 21:24:58.436236  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.436248  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:24:58.436257  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:24:58.436328  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:24:58.478394  312675 cri.go:89] found id: ""
	I0122 21:24:58.478431  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.478444  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:24:58.478453  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:24:58.478530  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:24:58.525824  312675 cri.go:89] found id: ""
	I0122 21:24:58.525861  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.525874  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:24:58.525882  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:24:58.525955  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:24:58.568686  312675 cri.go:89] found id: ""
	I0122 21:24:58.568722  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.568734  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:24:58.568744  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:24:58.568818  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:24:58.608831  312675 cri.go:89] found id: ""
	I0122 21:24:58.608866  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.608875  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:24:58.608882  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:24:58.608946  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:24:58.650019  312675 cri.go:89] found id: ""
	I0122 21:24:58.650055  312675 logs.go:282] 0 containers: []
	W0122 21:24:58.650066  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:24:58.650081  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:24:58.650100  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:24:58.701273  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:24:58.701318  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:24:58.717136  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:24:58.717180  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:24:58.796953  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:24:58.796996  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:24:58.797013  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:24:58.877623  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:24:58.877688  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:01.428301  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:01.443605  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:01.443690  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:01.486639  312675 cri.go:89] found id: ""
	I0122 21:25:01.486675  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.486685  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:01.486692  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:01.486761  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:01.534047  312675 cri.go:89] found id: ""
	I0122 21:25:01.534085  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.534098  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:01.534107  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:01.534204  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:01.574368  312675 cri.go:89] found id: ""
	I0122 21:25:01.574403  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.574416  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:01.574425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:01.574495  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:01.614383  312675 cri.go:89] found id: ""
	I0122 21:25:01.614423  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.614435  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:01.614442  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:01.614498  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:01.655201  312675 cri.go:89] found id: ""
	I0122 21:25:01.655242  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.655255  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:01.655264  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:01.655333  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:01.695389  312675 cri.go:89] found id: ""
	I0122 21:25:01.695421  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.695431  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:01.695441  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:01.695510  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:01.739226  312675 cri.go:89] found id: ""
	I0122 21:25:01.739262  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.739274  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:01.739282  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:01.739354  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:01.783680  312675 cri.go:89] found id: ""
	I0122 21:25:01.783720  312675 logs.go:282] 0 containers: []
	W0122 21:25:01.783733  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:01.783747  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:01.783770  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:01.838223  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:01.838290  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:01.854590  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:01.854632  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:01.941273  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:01.941303  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:01.941321  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:02.026161  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:02.026235  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:04.571891  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:04.586660  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:04.586728  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:04.628977  312675 cri.go:89] found id: ""
	I0122 21:25:04.629007  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.629017  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:04.629025  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:04.629091  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:04.669590  312675 cri.go:89] found id: ""
	I0122 21:25:04.669623  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.669636  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:04.669644  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:04.669714  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:04.712237  312675 cri.go:89] found id: ""
	I0122 21:25:04.712270  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.712280  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:04.712289  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:04.712357  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:04.755622  312675 cri.go:89] found id: ""
	I0122 21:25:04.755656  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.755665  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:04.755671  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:04.755742  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:04.812555  312675 cri.go:89] found id: ""
	I0122 21:25:04.812594  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.812606  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:04.812615  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:04.812685  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:04.855931  312675 cri.go:89] found id: ""
	I0122 21:25:04.855967  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.855979  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:04.855988  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:04.856054  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:04.899446  312675 cri.go:89] found id: ""
	I0122 21:25:04.899481  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.899492  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:04.899509  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:04.899587  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:04.940337  312675 cri.go:89] found id: ""
	I0122 21:25:04.940366  312675 logs.go:282] 0 containers: []
	W0122 21:25:04.940374  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:04.940385  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:04.940399  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:04.987354  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:04.987387  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:05.045261  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:05.045337  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:05.061312  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:05.061351  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:05.142767  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:05.142797  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:05.142814  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:07.728861  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:07.743582  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:07.743671  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:07.783338  312675 cri.go:89] found id: ""
	I0122 21:25:07.783367  312675 logs.go:282] 0 containers: []
	W0122 21:25:07.783374  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:07.783380  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:07.783444  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:07.823188  312675 cri.go:89] found id: ""
	I0122 21:25:07.823231  312675 logs.go:282] 0 containers: []
	W0122 21:25:07.823255  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:07.823265  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:07.823335  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:07.863240  312675 cri.go:89] found id: ""
	I0122 21:25:07.863280  312675 logs.go:282] 0 containers: []
	W0122 21:25:07.863292  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:07.863301  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:07.863375  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:07.909894  312675 cri.go:89] found id: ""
	I0122 21:25:07.909936  312675 logs.go:282] 0 containers: []
	W0122 21:25:07.909949  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:07.909959  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:07.910036  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:07.951328  312675 cri.go:89] found id: ""
	I0122 21:25:07.951368  312675 logs.go:282] 0 containers: []
	W0122 21:25:07.951384  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:07.951394  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:07.951463  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:07.994954  312675 cri.go:89] found id: ""
	I0122 21:25:07.994984  312675 logs.go:282] 0 containers: []
	W0122 21:25:07.994993  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:07.995000  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:07.995061  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:08.035096  312675 cri.go:89] found id: ""
	I0122 21:25:08.035126  312675 logs.go:282] 0 containers: []
	W0122 21:25:08.035138  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:08.035149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:08.035216  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:08.077353  312675 cri.go:89] found id: ""
	I0122 21:25:08.077392  312675 logs.go:282] 0 containers: []
	W0122 21:25:08.077403  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:08.077417  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:08.077435  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:08.131978  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:08.132023  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:08.149547  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:08.149577  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:08.234639  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:08.234670  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:08.234687  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:08.320809  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:08.320864  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:10.870011  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:10.884908  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:10.884995  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:10.928608  312675 cri.go:89] found id: ""
	I0122 21:25:10.928642  312675 logs.go:282] 0 containers: []
	W0122 21:25:10.928654  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:10.928662  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:10.928737  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:10.968712  312675 cri.go:89] found id: ""
	I0122 21:25:10.968743  312675 logs.go:282] 0 containers: []
	W0122 21:25:10.968759  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:10.968767  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:10.968835  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:11.012040  312675 cri.go:89] found id: ""
	I0122 21:25:11.012074  312675 logs.go:282] 0 containers: []
	W0122 21:25:11.012082  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:11.012089  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:11.012154  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:11.054177  312675 cri.go:89] found id: ""
	I0122 21:25:11.054235  312675 logs.go:282] 0 containers: []
	W0122 21:25:11.054247  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:11.054255  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:11.054314  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:11.098617  312675 cri.go:89] found id: ""
	I0122 21:25:11.098699  312675 logs.go:282] 0 containers: []
	W0122 21:25:11.098719  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:11.098729  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:11.098796  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:11.138678  312675 cri.go:89] found id: ""
	I0122 21:25:11.138711  312675 logs.go:282] 0 containers: []
	W0122 21:25:11.138721  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:11.138727  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:11.138793  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:11.184808  312675 cri.go:89] found id: ""
	I0122 21:25:11.184841  312675 logs.go:282] 0 containers: []
	W0122 21:25:11.184852  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:11.184862  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:11.184931  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:11.237848  312675 cri.go:89] found id: ""
	I0122 21:25:11.237887  312675 logs.go:282] 0 containers: []
	W0122 21:25:11.237900  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:11.237914  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:11.237931  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:11.294311  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:11.294356  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:11.316189  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:11.316221  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:11.407266  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:11.407295  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:11.407312  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:11.492387  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:11.492434  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:14.042699  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:14.059328  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:14.059418  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:14.104007  312675 cri.go:89] found id: ""
	I0122 21:25:14.104035  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.104044  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:14.104050  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:14.104104  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:14.145549  312675 cri.go:89] found id: ""
	I0122 21:25:14.145586  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.145597  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:14.145604  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:14.145660  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:14.191407  312675 cri.go:89] found id: ""
	I0122 21:25:14.191444  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.191456  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:14.191462  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:14.191518  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:14.232617  312675 cri.go:89] found id: ""
	I0122 21:25:14.232657  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.232669  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:14.232678  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:14.232749  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:14.275319  312675 cri.go:89] found id: ""
	I0122 21:25:14.275358  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.275370  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:14.275378  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:14.275449  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:14.317970  312675 cri.go:89] found id: ""
	I0122 21:25:14.318001  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.318009  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:14.318016  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:14.318077  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:14.364735  312675 cri.go:89] found id: ""
	I0122 21:25:14.364764  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.364773  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:14.364782  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:14.364850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:14.407667  312675 cri.go:89] found id: ""
	I0122 21:25:14.407700  312675 logs.go:282] 0 containers: []
	W0122 21:25:14.407712  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:14.407757  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:14.407781  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:14.424512  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:14.424546  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:14.498437  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:14.498467  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:14.498488  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:14.578789  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:14.578835  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:14.634563  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:14.634592  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:17.188938  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:17.226561  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:17.226645  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:17.287042  312675 cri.go:89] found id: ""
	I0122 21:25:17.287087  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.287100  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:17.287112  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:17.287201  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:17.327308  312675 cri.go:89] found id: ""
	I0122 21:25:17.327349  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.327361  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:17.327369  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:17.327441  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:17.370652  312675 cri.go:89] found id: ""
	I0122 21:25:17.370685  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.370695  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:17.370704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:17.370762  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:17.409740  312675 cri.go:89] found id: ""
	I0122 21:25:17.409778  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.409790  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:17.409798  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:17.409871  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:17.454065  312675 cri.go:89] found id: ""
	I0122 21:25:17.454104  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.454116  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:17.454124  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:17.454214  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:17.496014  312675 cri.go:89] found id: ""
	I0122 21:25:17.496048  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.496058  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:17.496064  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:17.496129  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:17.536474  312675 cri.go:89] found id: ""
	I0122 21:25:17.536523  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.536537  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:17.536550  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:17.536623  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:17.577148  312675 cri.go:89] found id: ""
	I0122 21:25:17.577192  312675 logs.go:282] 0 containers: []
	W0122 21:25:17.577204  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:17.577218  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:17.577236  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:17.592476  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:17.592517  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:17.675448  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:17.675485  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:17.675504  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:17.754820  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:17.754873  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:17.801753  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:17.801784  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:20.356300  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:20.370663  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:20.370734  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:20.410771  312675 cri.go:89] found id: ""
	I0122 21:25:20.410803  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.410814  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:20.410823  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:20.410893  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:20.452690  312675 cri.go:89] found id: ""
	I0122 21:25:20.452722  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.452730  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:20.452736  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:20.452792  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:20.490282  312675 cri.go:89] found id: ""
	I0122 21:25:20.490323  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.490336  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:20.490345  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:20.490418  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:20.533301  312675 cri.go:89] found id: ""
	I0122 21:25:20.533337  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.533349  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:20.533359  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:20.533443  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:20.573668  312675 cri.go:89] found id: ""
	I0122 21:25:20.573703  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.573715  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:20.573724  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:20.573803  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:20.614840  312675 cri.go:89] found id: ""
	I0122 21:25:20.614871  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.614880  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:20.614886  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:20.614944  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:20.658940  312675 cri.go:89] found id: ""
	I0122 21:25:20.658968  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.658976  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:20.658982  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:20.659044  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:20.698625  312675 cri.go:89] found id: ""
	I0122 21:25:20.698660  312675 logs.go:282] 0 containers: []
	W0122 21:25:20.698671  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:20.698689  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:20.698705  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:20.713492  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:20.713525  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:20.790759  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:20.790794  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:20.790810  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:20.876048  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:20.876096  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:20.920994  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:20.921027  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:23.478228  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:23.494390  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:23.494464  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:23.537302  312675 cri.go:89] found id: ""
	I0122 21:25:23.537345  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.537357  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:23.537366  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:23.537443  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:23.578240  312675 cri.go:89] found id: ""
	I0122 21:25:23.578277  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.578287  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:23.578294  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:23.578358  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:23.622650  312675 cri.go:89] found id: ""
	I0122 21:25:23.622682  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.622694  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:23.622701  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:23.622774  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:23.665202  312675 cri.go:89] found id: ""
	I0122 21:25:23.665238  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.665248  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:23.665255  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:23.665310  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:23.706033  312675 cri.go:89] found id: ""
	I0122 21:25:23.706062  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.706071  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:23.706078  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:23.706146  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:23.745749  312675 cri.go:89] found id: ""
	I0122 21:25:23.745777  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.745786  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:23.745793  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:23.745863  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:23.784285  312675 cri.go:89] found id: ""
	I0122 21:25:23.784322  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.784331  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:23.784337  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:23.784404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:23.824155  312675 cri.go:89] found id: ""
	I0122 21:25:23.824187  312675 logs.go:282] 0 containers: []
	W0122 21:25:23.824200  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:23.824211  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:23.824227  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:23.876584  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:23.876652  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:23.893570  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:23.893613  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:23.977457  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:23.977485  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:23.977508  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:24.059465  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:24.059513  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:26.607551  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:26.622650  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:26.622733  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:26.665548  312675 cri.go:89] found id: ""
	I0122 21:25:26.665579  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.665590  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:26.665598  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:26.665672  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:26.705832  312675 cri.go:89] found id: ""
	I0122 21:25:26.705884  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.705896  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:26.705904  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:26.705993  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:26.745287  312675 cri.go:89] found id: ""
	I0122 21:25:26.745330  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.745342  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:26.745351  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:26.745427  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:26.784709  312675 cri.go:89] found id: ""
	I0122 21:25:26.784738  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.784747  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:26.784755  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:26.784823  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:26.822651  312675 cri.go:89] found id: ""
	I0122 21:25:26.822689  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.822701  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:26.822717  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:26.822794  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:26.863770  312675 cri.go:89] found id: ""
	I0122 21:25:26.863810  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.863822  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:26.863830  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:26.863901  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:26.908598  312675 cri.go:89] found id: ""
	I0122 21:25:26.908638  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.908650  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:26.908659  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:26.908731  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:26.949353  312675 cri.go:89] found id: ""
	I0122 21:25:26.949389  312675 logs.go:282] 0 containers: []
	W0122 21:25:26.949398  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:26.949410  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:26.949428  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:27.004119  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:27.004187  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:27.020341  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:27.020387  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:27.106169  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:27.106217  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:27.106237  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:27.186715  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:27.186763  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:29.735893  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:29.750479  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:29.750563  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:29.793019  312675 cri.go:89] found id: ""
	I0122 21:25:29.793068  312675 logs.go:282] 0 containers: []
	W0122 21:25:29.793077  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:29.793086  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:29.793155  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:29.833335  312675 cri.go:89] found id: ""
	I0122 21:25:29.833372  312675 logs.go:282] 0 containers: []
	W0122 21:25:29.833387  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:29.833396  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:29.833470  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:29.872616  312675 cri.go:89] found id: ""
	I0122 21:25:29.872647  312675 logs.go:282] 0 containers: []
	W0122 21:25:29.872656  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:29.872663  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:29.872719  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:29.913463  312675 cri.go:89] found id: ""
	I0122 21:25:29.913493  312675 logs.go:282] 0 containers: []
	W0122 21:25:29.913506  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:29.913515  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:29.913572  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:29.953195  312675 cri.go:89] found id: ""
	I0122 21:25:29.953231  312675 logs.go:282] 0 containers: []
	W0122 21:25:29.953244  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:29.953262  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:29.953338  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:29.993996  312675 cri.go:89] found id: ""
	I0122 21:25:29.994025  312675 logs.go:282] 0 containers: []
	W0122 21:25:29.994034  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:29.994040  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:29.994108  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:30.034640  312675 cri.go:89] found id: ""
	I0122 21:25:30.034679  312675 logs.go:282] 0 containers: []
	W0122 21:25:30.034692  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:30.034700  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:30.034760  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:30.074588  312675 cri.go:89] found id: ""
	I0122 21:25:30.074622  312675 logs.go:282] 0 containers: []
	W0122 21:25:30.074631  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:30.074643  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:30.074661  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:30.119851  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:30.119897  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:30.173141  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:30.173188  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:30.189984  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:30.190039  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:30.273438  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:30.273468  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:30.273483  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:32.855023  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:32.870334  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:32.870433  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:32.913696  312675 cri.go:89] found id: ""
	I0122 21:25:32.913727  312675 logs.go:282] 0 containers: []
	W0122 21:25:32.913735  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:32.913741  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:32.913794  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:32.957780  312675 cri.go:89] found id: ""
	I0122 21:25:32.957823  312675 logs.go:282] 0 containers: []
	W0122 21:25:32.957835  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:32.957844  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:32.957934  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:32.997737  312675 cri.go:89] found id: ""
	I0122 21:25:32.997772  312675 logs.go:282] 0 containers: []
	W0122 21:25:32.997781  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:32.997787  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:32.997841  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:33.038325  312675 cri.go:89] found id: ""
	I0122 21:25:33.038357  312675 logs.go:282] 0 containers: []
	W0122 21:25:33.038367  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:33.038374  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:33.038446  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:33.083056  312675 cri.go:89] found id: ""
	I0122 21:25:33.083099  312675 logs.go:282] 0 containers: []
	W0122 21:25:33.083111  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:33.083120  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:33.083187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:33.125877  312675 cri.go:89] found id: ""
	I0122 21:25:33.125916  312675 logs.go:282] 0 containers: []
	W0122 21:25:33.125927  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:33.125937  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:33.126008  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:33.168219  312675 cri.go:89] found id: ""
	I0122 21:25:33.168259  312675 logs.go:282] 0 containers: []
	W0122 21:25:33.168273  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:33.168283  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:33.168352  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:33.208274  312675 cri.go:89] found id: ""
	I0122 21:25:33.208309  312675 logs.go:282] 0 containers: []
	W0122 21:25:33.208321  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:33.208336  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:33.208354  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:33.223988  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:33.224034  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:33.305463  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:33.305491  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:33.305510  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:33.388095  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:33.388145  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:33.433128  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:33.433179  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:35.985813  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:36.001368  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:36.001440  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:36.052770  312675 cri.go:89] found id: ""
	I0122 21:25:36.052805  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.052817  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:36.052824  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:36.052892  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:36.092493  312675 cri.go:89] found id: ""
	I0122 21:25:36.092534  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.092546  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:36.092556  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:36.092626  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:36.132086  312675 cri.go:89] found id: ""
	I0122 21:25:36.132118  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.132132  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:36.132140  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:36.132246  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:36.174812  312675 cri.go:89] found id: ""
	I0122 21:25:36.174845  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.174857  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:36.174865  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:36.174938  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:36.217854  312675 cri.go:89] found id: ""
	I0122 21:25:36.217894  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.217908  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:36.217917  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:36.218005  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:36.263598  312675 cri.go:89] found id: ""
	I0122 21:25:36.263630  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.263642  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:36.263651  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:36.263724  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:36.302974  312675 cri.go:89] found id: ""
	I0122 21:25:36.303005  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.303015  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:36.303024  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:36.303095  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:36.342579  312675 cri.go:89] found id: ""
	I0122 21:25:36.342609  312675 logs.go:282] 0 containers: []
	W0122 21:25:36.342618  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:36.342628  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:36.342640  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:36.396824  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:36.396882  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:36.413129  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:36.413168  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:36.490988  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:36.491019  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:36.491038  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:36.572882  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:36.572930  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:39.129470  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:39.143788  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:39.143857  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:39.183871  312675 cri.go:89] found id: ""
	I0122 21:25:39.183899  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.183908  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:39.183915  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:39.183972  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:39.223793  312675 cri.go:89] found id: ""
	I0122 21:25:39.223827  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.223839  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:39.223848  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:39.223919  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:39.263098  312675 cri.go:89] found id: ""
	I0122 21:25:39.263133  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.263146  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:39.263155  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:39.263229  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:39.305792  312675 cri.go:89] found id: ""
	I0122 21:25:39.305821  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.305830  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:39.305837  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:39.305892  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:39.346583  312675 cri.go:89] found id: ""
	I0122 21:25:39.346612  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.346620  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:39.346627  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:39.346684  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:39.386178  312675 cri.go:89] found id: ""
	I0122 21:25:39.386237  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.386250  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:39.386259  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:39.386335  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:39.427769  312675 cri.go:89] found id: ""
	I0122 21:25:39.427796  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.427805  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:39.427812  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:39.427867  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:39.467672  312675 cri.go:89] found id: ""
	I0122 21:25:39.467712  312675 logs.go:282] 0 containers: []
	W0122 21:25:39.467726  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:39.467739  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:39.467752  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:39.524308  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:39.524372  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:39.547254  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:39.547291  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:39.636037  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:39.636063  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:39.636082  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:39.719743  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:39.719796  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:42.266433  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:42.284891  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:42.284974  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:42.324952  312675 cri.go:89] found id: ""
	I0122 21:25:42.324992  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.325006  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:42.325015  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:42.325079  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:42.363939  312675 cri.go:89] found id: ""
	I0122 21:25:42.363972  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.363980  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:42.363986  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:42.364042  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:42.409887  312675 cri.go:89] found id: ""
	I0122 21:25:42.409924  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.409936  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:42.409945  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:42.410022  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:42.457066  312675 cri.go:89] found id: ""
	I0122 21:25:42.457097  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.457108  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:42.457124  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:42.457189  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:42.498973  312675 cri.go:89] found id: ""
	I0122 21:25:42.499009  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.499021  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:42.499029  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:42.499105  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:42.550747  312675 cri.go:89] found id: ""
	I0122 21:25:42.550802  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.550814  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:42.550824  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:42.550918  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:42.592772  312675 cri.go:89] found id: ""
	I0122 21:25:42.592813  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.592825  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:42.592835  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:42.592906  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:42.633684  312675 cri.go:89] found id: ""
	I0122 21:25:42.633721  312675 logs.go:282] 0 containers: []
	W0122 21:25:42.633734  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:42.633747  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:42.633768  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:42.692155  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:42.692203  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:42.708250  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:42.708300  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:42.796164  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:42.796198  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:42.796219  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:42.882589  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:42.882648  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:45.428789  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:45.445055  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:45.445147  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:45.488352  312675 cri.go:89] found id: ""
	I0122 21:25:45.488385  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.488394  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:45.488400  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:45.488455  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:45.529062  312675 cri.go:89] found id: ""
	I0122 21:25:45.529101  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.529113  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:45.529123  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:45.529198  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:45.573443  312675 cri.go:89] found id: ""
	I0122 21:25:45.573472  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.573480  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:45.573487  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:45.573560  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:45.619005  312675 cri.go:89] found id: ""
	I0122 21:25:45.619047  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.619061  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:45.619070  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:45.619137  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:45.660517  312675 cri.go:89] found id: ""
	I0122 21:25:45.660550  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.660563  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:45.660571  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:45.660644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:45.701945  312675 cri.go:89] found id: ""
	I0122 21:25:45.701986  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.701999  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:45.702007  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:45.702080  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:45.745373  312675 cri.go:89] found id: ""
	I0122 21:25:45.745412  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.745426  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:45.745435  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:45.745509  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:45.784258  312675 cri.go:89] found id: ""
	I0122 21:25:45.784303  312675 logs.go:282] 0 containers: []
	W0122 21:25:45.784318  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:45.784333  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:45.784351  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:45.837440  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:45.837491  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:45.853245  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:45.853292  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:45.937224  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:45.937257  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:45.937277  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:46.025499  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:46.025550  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:48.593312  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:48.609013  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:48.609090  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:48.652570  312675 cri.go:89] found id: ""
	I0122 21:25:48.652601  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.652611  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:48.652620  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:48.652684  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:48.697396  312675 cri.go:89] found id: ""
	I0122 21:25:48.697436  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.697450  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:48.697458  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:48.697532  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:48.739715  312675 cri.go:89] found id: ""
	I0122 21:25:48.739745  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.739753  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:48.739760  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:48.739830  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:48.782593  312675 cri.go:89] found id: ""
	I0122 21:25:48.782632  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.782644  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:48.782652  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:48.782726  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:48.823229  312675 cri.go:89] found id: ""
	I0122 21:25:48.823258  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.823267  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:48.823273  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:48.823328  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:48.863777  312675 cri.go:89] found id: ""
	I0122 21:25:48.863805  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.863815  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:48.863822  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:48.863952  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:48.908699  312675 cri.go:89] found id: ""
	I0122 21:25:48.908726  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.908734  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:48.908740  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:48.908792  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:48.952189  312675 cri.go:89] found id: ""
	I0122 21:25:48.952223  312675 logs.go:282] 0 containers: []
	W0122 21:25:48.952232  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:48.952246  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:48.952259  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:49.001895  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:49.001925  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:49.055632  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:49.055683  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:49.071910  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:49.071942  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:49.156697  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:49.156721  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:49.156735  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:51.732258  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:51.754065  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:51.754232  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:51.840722  312675 cri.go:89] found id: ""
	I0122 21:25:51.840762  312675 logs.go:282] 0 containers: []
	W0122 21:25:51.840774  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:51.840783  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:51.840866  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:51.895853  312675 cri.go:89] found id: ""
	I0122 21:25:51.895897  312675 logs.go:282] 0 containers: []
	W0122 21:25:51.895916  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:51.895926  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:51.895998  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:51.940433  312675 cri.go:89] found id: ""
	I0122 21:25:51.940478  312675 logs.go:282] 0 containers: []
	W0122 21:25:51.940505  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:51.940515  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:51.940610  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:51.986258  312675 cri.go:89] found id: ""
	I0122 21:25:51.986288  312675 logs.go:282] 0 containers: []
	W0122 21:25:51.986298  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:51.986306  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:51.986364  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:52.034528  312675 cri.go:89] found id: ""
	I0122 21:25:52.034558  312675 logs.go:282] 0 containers: []
	W0122 21:25:52.034567  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:52.034575  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:52.034641  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:52.079155  312675 cri.go:89] found id: ""
	I0122 21:25:52.079221  312675 logs.go:282] 0 containers: []
	W0122 21:25:52.079231  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:52.079237  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:52.079311  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:52.125617  312675 cri.go:89] found id: ""
	I0122 21:25:52.125658  312675 logs.go:282] 0 containers: []
	W0122 21:25:52.125681  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:52.125689  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:52.125771  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:52.178006  312675 cri.go:89] found id: ""
	I0122 21:25:52.178037  312675 logs.go:282] 0 containers: []
	W0122 21:25:52.178049  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:52.178064  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:52.178080  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:52.245181  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:52.245252  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:52.261854  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:52.261906  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:52.353838  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:52.353881  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:52.353899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:52.466279  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:52.466338  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:55.018354  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:55.034011  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:55.034112  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:55.075822  312675 cri.go:89] found id: ""
	I0122 21:25:55.075858  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.075870  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:55.075878  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:55.075944  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:55.114914  312675 cri.go:89] found id: ""
	I0122 21:25:55.114945  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.114956  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:55.114964  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:55.115036  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:55.158902  312675 cri.go:89] found id: ""
	I0122 21:25:55.158933  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.158945  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:55.158954  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:55.159035  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:55.200282  312675 cri.go:89] found id: ""
	I0122 21:25:55.200325  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.200337  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:55.200346  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:55.200410  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:55.243464  312675 cri.go:89] found id: ""
	I0122 21:25:55.243496  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.243515  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:55.243523  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:55.243592  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:55.287183  312675 cri.go:89] found id: ""
	I0122 21:25:55.287219  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.287230  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:55.287239  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:55.287308  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:55.331441  312675 cri.go:89] found id: ""
	I0122 21:25:55.331477  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.331491  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:55.331499  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:55.331567  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:55.379841  312675 cri.go:89] found id: ""
	I0122 21:25:55.379876  312675 logs.go:282] 0 containers: []
	W0122 21:25:55.379887  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:55.379898  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:55.379918  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:25:55.430596  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:55.430634  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:55.488918  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:55.488976  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:55.507169  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:55.507207  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:55.591725  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:55.591758  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:55.591775  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:58.175408  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:25:58.189730  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:25:58.189812  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:25:58.232766  312675 cri.go:89] found id: ""
	I0122 21:25:58.232805  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.232818  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:25:58.232830  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:25:58.232902  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:25:58.282867  312675 cri.go:89] found id: ""
	I0122 21:25:58.282902  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.282915  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:25:58.282925  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:25:58.282995  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:25:58.327781  312675 cri.go:89] found id: ""
	I0122 21:25:58.327816  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.327829  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:25:58.327837  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:25:58.327907  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:25:58.368734  312675 cri.go:89] found id: ""
	I0122 21:25:58.368767  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.368779  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:25:58.368787  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:25:58.368856  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:25:58.412593  312675 cri.go:89] found id: ""
	I0122 21:25:58.412626  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.412635  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:25:58.412640  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:25:58.412705  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:25:58.454305  312675 cri.go:89] found id: ""
	I0122 21:25:58.454345  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.454359  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:25:58.454368  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:25:58.454440  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:25:58.497405  312675 cri.go:89] found id: ""
	I0122 21:25:58.497440  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.497451  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:25:58.497458  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:25:58.497530  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:25:58.547239  312675 cri.go:89] found id: ""
	I0122 21:25:58.547298  312675 logs.go:282] 0 containers: []
	W0122 21:25:58.547313  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:25:58.547328  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:25:58.547346  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:25:58.612685  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:25:58.612740  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:25:58.631722  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:25:58.631776  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:25:58.728026  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:25:58.728055  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:25:58.728071  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:25:58.812509  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:25:58.812558  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:01.361106  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:01.377981  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:01.378076  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:01.426374  312675 cri.go:89] found id: ""
	I0122 21:26:01.426411  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.426429  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:01.426439  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:01.426513  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:01.480288  312675 cri.go:89] found id: ""
	I0122 21:26:01.480321  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.480334  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:01.480342  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:01.480401  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:01.530868  312675 cri.go:89] found id: ""
	I0122 21:26:01.530902  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.530913  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:01.530923  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:01.530999  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:01.577538  312675 cri.go:89] found id: ""
	I0122 21:26:01.577577  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.577589  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:01.577598  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:01.577668  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:01.621082  312675 cri.go:89] found id: ""
	I0122 21:26:01.621120  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.621141  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:01.621150  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:01.621225  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:01.667008  312675 cri.go:89] found id: ""
	I0122 21:26:01.667050  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.667065  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:01.667075  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:01.667152  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:01.711305  312675 cri.go:89] found id: ""
	I0122 21:26:01.711339  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.711350  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:01.711358  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:01.711424  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:01.758439  312675 cri.go:89] found id: ""
	I0122 21:26:01.758478  312675 logs.go:282] 0 containers: []
	W0122 21:26:01.758491  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:01.758505  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:01.758523  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:01.840654  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:01.840703  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:01.894246  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:01.894285  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:01.954609  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:01.954657  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:01.975131  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:01.975173  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:02.082318  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:04.583895  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:04.599785  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:04.599854  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:04.643510  312675 cri.go:89] found id: ""
	I0122 21:26:04.643547  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.643560  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:04.643569  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:04.643639  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:04.685480  312675 cri.go:89] found id: ""
	I0122 21:26:04.685511  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.685520  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:04.685526  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:04.685593  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:04.727714  312675 cri.go:89] found id: ""
	I0122 21:26:04.727748  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.727760  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:04.727769  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:04.727841  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:04.774065  312675 cri.go:89] found id: ""
	I0122 21:26:04.774109  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.774132  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:04.774141  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:04.774248  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:04.822683  312675 cri.go:89] found id: ""
	I0122 21:26:04.822726  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.822738  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:04.822751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:04.822813  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:04.863511  312675 cri.go:89] found id: ""
	I0122 21:26:04.863546  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.863556  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:04.863562  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:04.863621  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:04.913121  312675 cri.go:89] found id: ""
	I0122 21:26:04.913149  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.913159  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:04.913169  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:04.913247  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:04.961517  312675 cri.go:89] found id: ""
	I0122 21:26:04.961549  312675 logs.go:282] 0 containers: []
	W0122 21:26:04.961559  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:04.961572  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:04.961589  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:05.014564  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:05.014611  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:05.033089  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:05.033136  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:05.126124  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:05.126150  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:05.126166  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:05.222214  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:05.222263  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:07.781296  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:07.797368  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:07.797458  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:07.842252  312675 cri.go:89] found id: ""
	I0122 21:26:07.842280  312675 logs.go:282] 0 containers: []
	W0122 21:26:07.842289  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:07.842298  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:07.842364  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:07.894563  312675 cri.go:89] found id: ""
	I0122 21:26:07.894600  312675 logs.go:282] 0 containers: []
	W0122 21:26:07.894611  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:07.894619  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:07.894690  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:07.936208  312675 cri.go:89] found id: ""
	I0122 21:26:07.936243  312675 logs.go:282] 0 containers: []
	W0122 21:26:07.936253  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:07.936260  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:07.936335  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:07.975922  312675 cri.go:89] found id: ""
	I0122 21:26:07.975954  312675 logs.go:282] 0 containers: []
	W0122 21:26:07.975962  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:07.975969  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:07.976037  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:08.015180  312675 cri.go:89] found id: ""
	I0122 21:26:08.015233  312675 logs.go:282] 0 containers: []
	W0122 21:26:08.015245  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:08.015253  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:08.015327  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:08.054306  312675 cri.go:89] found id: ""
	I0122 21:26:08.054342  312675 logs.go:282] 0 containers: []
	W0122 21:26:08.054354  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:08.054364  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:08.054436  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:08.096973  312675 cri.go:89] found id: ""
	I0122 21:26:08.097027  312675 logs.go:282] 0 containers: []
	W0122 21:26:08.097040  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:08.097048  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:08.097127  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:08.141465  312675 cri.go:89] found id: ""
	I0122 21:26:08.141504  312675 logs.go:282] 0 containers: []
	W0122 21:26:08.141513  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:08.141528  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:08.141559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:08.195882  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:08.195939  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:08.213039  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:08.213081  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:08.292880  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:08.292915  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:08.292942  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:08.380628  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:08.380673  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:10.961936  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:10.981674  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:10.981756  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:11.027389  312675 cri.go:89] found id: ""
	I0122 21:26:11.027424  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.027438  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:11.027448  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:11.027511  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:11.071414  312675 cri.go:89] found id: ""
	I0122 21:26:11.071451  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.071463  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:11.071472  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:11.071547  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:11.125909  312675 cri.go:89] found id: ""
	I0122 21:26:11.125949  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.125961  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:11.125970  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:11.126038  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:11.166744  312675 cri.go:89] found id: ""
	I0122 21:26:11.166778  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.166789  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:11.166796  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:11.166870  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:11.210593  312675 cri.go:89] found id: ""
	I0122 21:26:11.210625  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.210633  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:11.210640  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:11.210698  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:11.252266  312675 cri.go:89] found id: ""
	I0122 21:26:11.252308  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.252320  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:11.252333  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:11.252408  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:11.298601  312675 cri.go:89] found id: ""
	I0122 21:26:11.298631  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.298640  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:11.298648  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:11.298728  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:11.345710  312675 cri.go:89] found id: ""
	I0122 21:26:11.345745  312675 logs.go:282] 0 containers: []
	W0122 21:26:11.345757  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:11.345771  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:11.345788  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:11.423680  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:11.423748  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:11.441070  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:11.441116  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:11.528906  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:11.528944  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:11.528961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:11.627364  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:11.627412  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:14.175438  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:14.194542  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:14.194642  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:14.235280  312675 cri.go:89] found id: ""
	I0122 21:26:14.235316  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.235327  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:14.235336  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:14.235402  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:14.277931  312675 cri.go:89] found id: ""
	I0122 21:26:14.277964  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.277975  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:14.277983  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:14.278055  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:14.319523  312675 cri.go:89] found id: ""
	I0122 21:26:14.319559  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.319569  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:14.319578  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:14.319649  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:14.365068  312675 cri.go:89] found id: ""
	I0122 21:26:14.365097  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.365106  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:14.365112  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:14.365173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:14.407481  312675 cri.go:89] found id: ""
	I0122 21:26:14.407510  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.407518  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:14.407525  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:14.407579  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:14.450868  312675 cri.go:89] found id: ""
	I0122 21:26:14.450907  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.450928  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:14.450935  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:14.451004  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:14.495022  312675 cri.go:89] found id: ""
	I0122 21:26:14.495053  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.495062  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:14.495069  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:14.495126  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:14.559915  312675 cri.go:89] found id: ""
	I0122 21:26:14.559952  312675 logs.go:282] 0 containers: []
	W0122 21:26:14.559972  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:14.559986  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:14.560004  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:14.611159  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:14.611190  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:14.663691  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:14.663738  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:14.679506  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:14.679537  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:14.769405  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:14.769432  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:14.769446  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:17.348110  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:17.368806  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:17.368900  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:17.418163  312675 cri.go:89] found id: ""
	I0122 21:26:17.418217  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.418229  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:17.418240  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:17.418315  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:17.466607  312675 cri.go:89] found id: ""
	I0122 21:26:17.466639  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.466650  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:17.466659  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:17.466723  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:17.508006  312675 cri.go:89] found id: ""
	I0122 21:26:17.508047  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.508058  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:17.508067  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:17.508134  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:17.553500  312675 cri.go:89] found id: ""
	I0122 21:26:17.553536  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.553547  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:17.553555  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:17.553628  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:17.605580  312675 cri.go:89] found id: ""
	I0122 21:26:17.605619  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.605632  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:17.605641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:17.605709  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:17.678450  312675 cri.go:89] found id: ""
	I0122 21:26:17.678484  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.678496  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:17.678504  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:17.678571  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:17.729107  312675 cri.go:89] found id: ""
	I0122 21:26:17.729140  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.729150  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:17.729158  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:17.729228  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:17.774740  312675 cri.go:89] found id: ""
	I0122 21:26:17.774778  312675 logs.go:282] 0 containers: []
	W0122 21:26:17.774790  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:17.774803  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:17.774820  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:17.879383  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:17.879423  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:17.934595  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:17.934708  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:18.006737  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:18.006776  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:18.024218  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:18.024271  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:18.124612  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:20.625718  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:20.641228  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:20.641321  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:20.697692  312675 cri.go:89] found id: ""
	I0122 21:26:20.697728  312675 logs.go:282] 0 containers: []
	W0122 21:26:20.697740  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:20.697749  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:20.697814  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:20.763128  312675 cri.go:89] found id: ""
	I0122 21:26:20.763164  312675 logs.go:282] 0 containers: []
	W0122 21:26:20.763175  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:20.763183  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:20.763251  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:20.818387  312675 cri.go:89] found id: ""
	I0122 21:26:20.818421  312675 logs.go:282] 0 containers: []
	W0122 21:26:20.818433  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:20.818442  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:20.818518  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:20.863386  312675 cri.go:89] found id: ""
	I0122 21:26:20.863418  312675 logs.go:282] 0 containers: []
	W0122 21:26:20.863426  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:20.863433  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:20.863499  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:20.914280  312675 cri.go:89] found id: ""
	I0122 21:26:20.914342  312675 logs.go:282] 0 containers: []
	W0122 21:26:20.914358  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:20.914371  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:20.914450  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:20.968900  312675 cri.go:89] found id: ""
	I0122 21:26:20.968939  312675 logs.go:282] 0 containers: []
	W0122 21:26:20.968951  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:20.968961  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:20.969038  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:21.029225  312675 cri.go:89] found id: ""
	I0122 21:26:21.029263  312675 logs.go:282] 0 containers: []
	W0122 21:26:21.029275  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:21.029285  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:21.029358  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:21.092234  312675 cri.go:89] found id: ""
	I0122 21:26:21.092268  312675 logs.go:282] 0 containers: []
	W0122 21:26:21.092276  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:21.092287  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:21.092305  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:21.157364  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:21.157404  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:21.230813  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:21.230851  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:21.248676  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:21.248716  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:21.352638  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:21.352675  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:21.352692  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:23.954341  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:23.973819  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:23.973912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:24.029641  312675 cri.go:89] found id: ""
	I0122 21:26:24.029679  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.029691  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:24.029699  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:24.029777  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:24.073514  312675 cri.go:89] found id: ""
	I0122 21:26:24.073567  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.073580  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:24.073588  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:24.073666  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:24.121843  312675 cri.go:89] found id: ""
	I0122 21:26:24.121876  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.121887  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:24.121896  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:24.121963  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:24.172933  312675 cri.go:89] found id: ""
	I0122 21:26:24.172968  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.172978  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:24.172986  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:24.173061  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:24.225968  312675 cri.go:89] found id: ""
	I0122 21:26:24.226014  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.226027  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:24.226036  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:24.226114  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:24.279547  312675 cri.go:89] found id: ""
	I0122 21:26:24.279580  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.279592  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:24.279601  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:24.279676  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:24.333779  312675 cri.go:89] found id: ""
	I0122 21:26:24.333819  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.333831  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:24.333840  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:24.333982  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:24.375979  312675 cri.go:89] found id: ""
	I0122 21:26:24.376013  312675 logs.go:282] 0 containers: []
	W0122 21:26:24.376025  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:24.376038  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:24.376055  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:24.487374  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:24.487427  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:24.543165  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:24.543219  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:24.609317  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:24.609364  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:24.629412  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:24.629459  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:24.739416  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:27.240249  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:27.259475  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:27.259564  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:27.301978  312675 cri.go:89] found id: ""
	I0122 21:26:27.302010  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.302021  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:27.302027  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:27.302085  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:27.341322  312675 cri.go:89] found id: ""
	I0122 21:26:27.341359  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.341370  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:27.341392  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:27.341458  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:27.379006  312675 cri.go:89] found id: ""
	I0122 21:26:27.379044  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.379057  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:27.379066  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:27.379140  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:27.420380  312675 cri.go:89] found id: ""
	I0122 21:26:27.420417  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.420429  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:27.420439  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:27.420510  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:27.461946  312675 cri.go:89] found id: ""
	I0122 21:26:27.461982  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.461992  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:27.462005  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:27.462072  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:27.501771  312675 cri.go:89] found id: ""
	I0122 21:26:27.501810  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.501820  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:27.501827  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:27.501887  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:27.550082  312675 cri.go:89] found id: ""
	I0122 21:26:27.550113  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.550122  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:27.550129  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:27.550218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:27.590432  312675 cri.go:89] found id: ""
	I0122 21:26:27.590461  312675 logs.go:282] 0 containers: []
	W0122 21:26:27.590469  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:27.590481  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:27.590495  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:27.643678  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:27.643728  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:27.659343  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:27.659387  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:27.740234  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:27.740260  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:27.740273  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:27.833548  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:27.833595  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:30.382374  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:30.397481  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:30.397579  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:30.445498  312675 cri.go:89] found id: ""
	I0122 21:26:30.445533  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.445545  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:30.445553  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:30.445629  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:30.488874  312675 cri.go:89] found id: ""
	I0122 21:26:30.488907  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.488919  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:30.488927  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:30.488995  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:30.534663  312675 cri.go:89] found id: ""
	I0122 21:26:30.534695  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.534706  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:30.534715  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:30.534787  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:30.582303  312675 cri.go:89] found id: ""
	I0122 21:26:30.582344  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.582357  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:30.582367  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:30.582456  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:30.622751  312675 cri.go:89] found id: ""
	I0122 21:26:30.622790  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.622801  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:30.622807  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:30.622870  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:30.664603  312675 cri.go:89] found id: ""
	I0122 21:26:30.664640  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.664652  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:30.664660  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:30.664738  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:30.705111  312675 cri.go:89] found id: ""
	I0122 21:26:30.705152  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.705165  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:30.705174  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:30.705245  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:30.746636  312675 cri.go:89] found id: ""
	I0122 21:26:30.746676  312675 logs.go:282] 0 containers: []
	W0122 21:26:30.746687  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:30.746698  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:30.746714  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:30.803368  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:30.803417  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:30.822991  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:30.823036  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:30.908462  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:30.908496  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:30.908515  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:30.991359  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:30.991405  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:33.548131  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:33.564563  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:33.564654  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:33.605964  312675 cri.go:89] found id: ""
	I0122 21:26:33.606002  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.606015  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:33.606023  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:33.606095  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:33.660692  312675 cri.go:89] found id: ""
	I0122 21:26:33.660730  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.660742  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:33.660751  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:33.660851  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:33.711942  312675 cri.go:89] found id: ""
	I0122 21:26:33.711983  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.711995  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:33.712017  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:33.712098  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:33.752058  312675 cri.go:89] found id: ""
	I0122 21:26:33.752096  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.752109  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:33.752117  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:33.752193  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:33.800274  312675 cri.go:89] found id: ""
	I0122 21:26:33.800313  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.800326  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:33.800335  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:33.800419  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:33.845735  312675 cri.go:89] found id: ""
	I0122 21:26:33.845850  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.845885  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:33.845918  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:33.846044  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:33.888368  312675 cri.go:89] found id: ""
	I0122 21:26:33.888417  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.888430  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:33.888438  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:33.888511  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:33.928378  312675 cri.go:89] found id: ""
	I0122 21:26:33.928417  312675 logs.go:282] 0 containers: []
	W0122 21:26:33.928427  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:33.928444  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:33.928466  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:33.982002  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:33.982061  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:33.999985  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:34.000042  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:34.080605  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:34.080639  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:34.080658  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:34.192192  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:34.192244  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:36.738738  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:36.754382  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:36.754461  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:36.794611  312675 cri.go:89] found id: ""
	I0122 21:26:36.794649  312675 logs.go:282] 0 containers: []
	W0122 21:26:36.794658  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:36.794666  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:36.794733  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:36.839006  312675 cri.go:89] found id: ""
	I0122 21:26:36.839037  312675 logs.go:282] 0 containers: []
	W0122 21:26:36.839046  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:36.839053  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:36.839107  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:36.881249  312675 cri.go:89] found id: ""
	I0122 21:26:36.881277  312675 logs.go:282] 0 containers: []
	W0122 21:26:36.881286  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:36.881294  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:36.881363  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:36.922551  312675 cri.go:89] found id: ""
	I0122 21:26:36.922582  312675 logs.go:282] 0 containers: []
	W0122 21:26:36.922592  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:36.922600  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:36.922671  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:36.963099  312675 cri.go:89] found id: ""
	I0122 21:26:36.963126  312675 logs.go:282] 0 containers: []
	W0122 21:26:36.963135  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:36.963141  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:36.963214  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:37.002612  312675 cri.go:89] found id: ""
	I0122 21:26:37.002652  312675 logs.go:282] 0 containers: []
	W0122 21:26:37.002666  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:37.002675  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:37.002753  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:37.042309  312675 cri.go:89] found id: ""
	I0122 21:26:37.042350  312675 logs.go:282] 0 containers: []
	W0122 21:26:37.042363  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:37.042373  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:37.042444  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:37.079847  312675 cri.go:89] found id: ""
	I0122 21:26:37.079887  312675 logs.go:282] 0 containers: []
	W0122 21:26:37.079900  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:37.079914  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:37.079931  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:37.132564  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:37.132608  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:37.148266  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:37.148301  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:37.233318  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:37.233340  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:37.233355  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:37.313175  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:37.313228  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:39.857508  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:39.873100  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:39.873201  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:39.916957  312675 cri.go:89] found id: ""
	I0122 21:26:39.917000  312675 logs.go:282] 0 containers: []
	W0122 21:26:39.917013  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:39.917027  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:39.917099  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:39.957770  312675 cri.go:89] found id: ""
	I0122 21:26:39.957810  312675 logs.go:282] 0 containers: []
	W0122 21:26:39.957822  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:39.957832  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:39.957901  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:40.002210  312675 cri.go:89] found id: ""
	I0122 21:26:40.002246  312675 logs.go:282] 0 containers: []
	W0122 21:26:40.002260  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:40.002268  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:40.002341  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:40.045541  312675 cri.go:89] found id: ""
	I0122 21:26:40.045579  312675 logs.go:282] 0 containers: []
	W0122 21:26:40.045591  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:40.045600  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:40.045682  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:40.086790  312675 cri.go:89] found id: ""
	I0122 21:26:40.086821  312675 logs.go:282] 0 containers: []
	W0122 21:26:40.086833  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:40.086842  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:40.086909  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:40.126270  312675 cri.go:89] found id: ""
	I0122 21:26:40.126316  312675 logs.go:282] 0 containers: []
	W0122 21:26:40.126329  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:40.126339  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:40.126408  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:40.172421  312675 cri.go:89] found id: ""
	I0122 21:26:40.172458  312675 logs.go:282] 0 containers: []
	W0122 21:26:40.172471  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:40.172480  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:40.172546  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:40.215462  312675 cri.go:89] found id: ""
	I0122 21:26:40.215502  312675 logs.go:282] 0 containers: []
	W0122 21:26:40.215514  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:40.215529  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:40.215544  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:40.312924  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:40.312976  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:40.359025  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:40.359064  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:40.417431  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:40.417484  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:40.432084  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:40.432123  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:40.515620  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:43.015932  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:43.031531  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:43.031622  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:43.078018  312675 cri.go:89] found id: ""
	I0122 21:26:43.078051  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.078061  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:43.078068  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:43.078128  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:43.123478  312675 cri.go:89] found id: ""
	I0122 21:26:43.123515  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.123528  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:43.123536  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:43.123606  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:43.164605  312675 cri.go:89] found id: ""
	I0122 21:26:43.164638  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.164647  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:43.164655  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:43.164713  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:43.203245  312675 cri.go:89] found id: ""
	I0122 21:26:43.203271  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.203280  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:43.203286  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:43.203344  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:43.244158  312675 cri.go:89] found id: ""
	I0122 21:26:43.244194  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.244206  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:43.244215  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:43.244286  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:43.287178  312675 cri.go:89] found id: ""
	I0122 21:26:43.287216  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.287227  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:43.287235  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:43.287298  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:43.327455  312675 cri.go:89] found id: ""
	I0122 21:26:43.327486  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.327496  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:43.327504  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:43.327569  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:43.367591  312675 cri.go:89] found id: ""
	I0122 21:26:43.367630  312675 logs.go:282] 0 containers: []
	W0122 21:26:43.367642  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:43.367656  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:43.367672  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:43.425447  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:43.425496  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:43.440264  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:43.440302  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:43.526393  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:43.526425  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:43.526444  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:43.610131  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:43.610209  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:46.154948  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:46.169545  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:46.169629  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:46.219026  312675 cri.go:89] found id: ""
	I0122 21:26:46.219063  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.219074  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:46.219083  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:46.219157  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:46.258444  312675 cri.go:89] found id: ""
	I0122 21:26:46.258480  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.258490  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:46.258497  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:46.258557  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:46.297268  312675 cri.go:89] found id: ""
	I0122 21:26:46.297301  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.297310  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:46.297318  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:46.297375  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:46.337299  312675 cri.go:89] found id: ""
	I0122 21:26:46.337331  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.337342  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:46.337351  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:46.337413  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:46.375669  312675 cri.go:89] found id: ""
	I0122 21:26:46.375702  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.375711  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:46.375725  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:46.375781  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:46.416029  312675 cri.go:89] found id: ""
	I0122 21:26:46.416065  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.416074  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:46.416083  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:46.416140  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:46.455247  312675 cri.go:89] found id: ""
	I0122 21:26:46.455282  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.455291  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:46.455297  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:46.455355  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:46.493811  312675 cri.go:89] found id: ""
	I0122 21:26:46.493842  312675 logs.go:282] 0 containers: []
	W0122 21:26:46.493853  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:46.493866  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:46.493896  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:46.537937  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:46.537971  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:46.589085  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:46.589127  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:46.603855  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:46.603894  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:46.682816  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:46.682837  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:46.682852  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:49.264691  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:49.286290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:49.286384  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:49.329580  312675 cri.go:89] found id: ""
	I0122 21:26:49.329621  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.329634  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:49.329643  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:49.329719  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:49.371700  312675 cri.go:89] found id: ""
	I0122 21:26:49.371729  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.371737  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:49.371743  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:49.371805  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:49.416810  312675 cri.go:89] found id: ""
	I0122 21:26:49.416844  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.416856  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:49.416876  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:49.416952  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:49.460733  312675 cri.go:89] found id: ""
	I0122 21:26:49.460768  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.460779  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:49.460788  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:49.460883  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:49.503652  312675 cri.go:89] found id: ""
	I0122 21:26:49.503681  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.503689  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:49.503696  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:49.503768  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:49.556762  312675 cri.go:89] found id: ""
	I0122 21:26:49.556790  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.556798  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:49.556805  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:49.556893  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:49.604254  312675 cri.go:89] found id: ""
	I0122 21:26:49.604290  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.604300  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:49.604306  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:49.604376  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:49.652154  312675 cri.go:89] found id: ""
	I0122 21:26:49.652190  312675 logs.go:282] 0 containers: []
	W0122 21:26:49.652203  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:49.652226  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:49.652243  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:49.735543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:49.735587  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:49.786666  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:49.786700  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:49.852999  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:49.853044  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:49.870230  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:49.870277  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:49.972477  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:52.474419  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:52.492720  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:52.492800  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:52.540649  312675 cri.go:89] found id: ""
	I0122 21:26:52.540692  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.540704  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:52.540713  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:52.540790  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:52.584625  312675 cri.go:89] found id: ""
	I0122 21:26:52.584661  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.584682  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:52.584690  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:52.584765  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:52.635864  312675 cri.go:89] found id: ""
	I0122 21:26:52.635896  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.635907  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:52.635924  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:52.635992  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:52.687070  312675 cri.go:89] found id: ""
	I0122 21:26:52.687106  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.687118  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:52.687128  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:52.687209  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:52.731614  312675 cri.go:89] found id: ""
	I0122 21:26:52.731642  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.731650  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:52.731657  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:52.731730  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:52.785089  312675 cri.go:89] found id: ""
	I0122 21:26:52.785129  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.785142  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:52.785151  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:52.785231  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:52.830275  312675 cri.go:89] found id: ""
	I0122 21:26:52.830312  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.830324  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:52.830332  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:52.830413  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:52.887043  312675 cri.go:89] found id: ""
	I0122 21:26:52.887082  312675 logs.go:282] 0 containers: []
	W0122 21:26:52.887093  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:52.887108  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:52.887123  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:52.954267  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:52.954317  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:52.971160  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:52.971210  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:53.067863  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:53.067886  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:53.067899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:53.166708  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:53.166752  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:55.713970  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:55.741429  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:55.741516  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:55.798285  312675 cri.go:89] found id: ""
	I0122 21:26:55.798320  312675 logs.go:282] 0 containers: []
	W0122 21:26:55.798331  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:55.798339  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:55.798407  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:55.876424  312675 cri.go:89] found id: ""
	I0122 21:26:55.876458  312675 logs.go:282] 0 containers: []
	W0122 21:26:55.876470  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:55.876478  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:55.876550  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:55.941180  312675 cri.go:89] found id: ""
	I0122 21:26:55.941216  312675 logs.go:282] 0 containers: []
	W0122 21:26:55.941226  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:55.941232  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:55.941288  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:55.987759  312675 cri.go:89] found id: ""
	I0122 21:26:55.987791  312675 logs.go:282] 0 containers: []
	W0122 21:26:55.987802  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:55.987810  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:55.987892  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:56.034224  312675 cri.go:89] found id: ""
	I0122 21:26:56.034259  312675 logs.go:282] 0 containers: []
	W0122 21:26:56.034270  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:56.034279  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:56.034357  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:56.077698  312675 cri.go:89] found id: ""
	I0122 21:26:56.077733  312675 logs.go:282] 0 containers: []
	W0122 21:26:56.077747  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:56.077756  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:56.077840  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:56.137128  312675 cri.go:89] found id: ""
	I0122 21:26:56.137166  312675 logs.go:282] 0 containers: []
	W0122 21:26:56.137177  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:56.137186  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:56.137272  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:56.188907  312675 cri.go:89] found id: ""
	I0122 21:26:56.188937  312675 logs.go:282] 0 containers: []
	W0122 21:26:56.188956  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:56.188970  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:56.189023  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:56.237086  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:56.237126  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:26:56.304951  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:56.305004  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:56.326235  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:56.326273  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:56.426697  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:56.426730  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:56.426748  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:59.039431  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:26:59.060138  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:26:59.060239  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:26:59.105845  312675 cri.go:89] found id: ""
	I0122 21:26:59.105879  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.105890  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:26:59.105899  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:26:59.105972  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:26:59.155522  312675 cri.go:89] found id: ""
	I0122 21:26:59.155569  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.155581  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:26:59.155592  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:26:59.155739  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:26:59.205016  312675 cri.go:89] found id: ""
	I0122 21:26:59.205052  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.205063  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:26:59.205072  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:26:59.205143  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:26:59.256229  312675 cri.go:89] found id: ""
	I0122 21:26:59.256270  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.256283  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:26:59.256291  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:26:59.256363  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:26:59.306165  312675 cri.go:89] found id: ""
	I0122 21:26:59.306242  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.306252  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:26:59.306262  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:26:59.306336  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:26:59.351017  312675 cri.go:89] found id: ""
	I0122 21:26:59.351054  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.351066  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:26:59.351076  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:26:59.351146  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:26:59.393497  312675 cri.go:89] found id: ""
	I0122 21:26:59.393531  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.393543  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:26:59.393552  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:26:59.393624  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:26:59.443667  312675 cri.go:89] found id: ""
	I0122 21:26:59.443705  312675 logs.go:282] 0 containers: []
	W0122 21:26:59.443716  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:26:59.443731  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:26:59.443749  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:26:59.459822  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:26:59.459872  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:26:59.545316  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:26:59.545347  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:26:59.545365  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:26:59.624737  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:26:59.624781  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:26:59.674512  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:26:59.674558  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:02.240873  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:02.255703  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:02.255787  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:02.302589  312675 cri.go:89] found id: ""
	I0122 21:27:02.302624  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.302637  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:02.302646  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:02.302711  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:02.348043  312675 cri.go:89] found id: ""
	I0122 21:27:02.348074  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.348085  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:02.348093  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:02.348163  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:02.399251  312675 cri.go:89] found id: ""
	I0122 21:27:02.399289  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.399301  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:02.399310  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:02.399390  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:02.460117  312675 cri.go:89] found id: ""
	I0122 21:27:02.460166  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.460177  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:02.460186  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:02.460271  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:02.508934  312675 cri.go:89] found id: ""
	I0122 21:27:02.508971  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.508983  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:02.508992  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:02.509058  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:02.559048  312675 cri.go:89] found id: ""
	I0122 21:27:02.559080  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.559092  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:02.559100  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:02.559166  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:02.611561  312675 cri.go:89] found id: ""
	I0122 21:27:02.611600  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.611612  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:02.611621  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:02.611698  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:02.664956  312675 cri.go:89] found id: ""
	I0122 21:27:02.665001  312675 logs.go:282] 0 containers: []
	W0122 21:27:02.665012  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:02.665026  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:02.665041  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:02.742202  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:02.742257  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:02.766532  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:02.766650  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:02.890578  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:02.890680  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:02.890713  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:03.008288  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:03.008340  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:05.566782  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:05.580942  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:05.581029  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:05.622741  312675 cri.go:89] found id: ""
	I0122 21:27:05.622777  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.622786  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:05.622793  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:05.622860  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:05.664402  312675 cri.go:89] found id: ""
	I0122 21:27:05.664440  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.664452  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:05.664461  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:05.664533  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:05.707151  312675 cri.go:89] found id: ""
	I0122 21:27:05.707193  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.707206  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:05.707215  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:05.707291  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:05.747357  312675 cri.go:89] found id: ""
	I0122 21:27:05.747395  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.747406  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:05.747415  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:05.747489  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:05.787808  312675 cri.go:89] found id: ""
	I0122 21:27:05.787914  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.787939  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:05.787958  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:05.788052  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:05.835295  312675 cri.go:89] found id: ""
	I0122 21:27:05.835341  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.835353  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:05.835361  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:05.835431  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:05.882747  312675 cri.go:89] found id: ""
	I0122 21:27:05.882785  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.882798  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:05.882807  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:05.882889  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:05.930285  312675 cri.go:89] found id: ""
	I0122 21:27:05.930321  312675 logs.go:282] 0 containers: []
	W0122 21:27:05.930334  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:05.930348  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:05.930366  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:05.990932  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:05.990996  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:06.010340  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:06.010388  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:06.097410  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:06.097438  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:06.097454  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:06.201739  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:06.201801  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:08.761627  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:08.777089  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:08.777179  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:08.832087  312675 cri.go:89] found id: ""
	I0122 21:27:08.832124  312675 logs.go:282] 0 containers: []
	W0122 21:27:08.832137  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:08.832147  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:08.832232  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:08.889679  312675 cri.go:89] found id: ""
	I0122 21:27:08.889711  312675 logs.go:282] 0 containers: []
	W0122 21:27:08.889722  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:08.889729  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:08.889793  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:08.942014  312675 cri.go:89] found id: ""
	I0122 21:27:08.942054  312675 logs.go:282] 0 containers: []
	W0122 21:27:08.942067  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:08.942076  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:08.942157  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:08.995834  312675 cri.go:89] found id: ""
	I0122 21:27:08.995872  312675 logs.go:282] 0 containers: []
	W0122 21:27:08.995883  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:08.995892  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:08.995966  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:09.046818  312675 cri.go:89] found id: ""
	I0122 21:27:09.046854  312675 logs.go:282] 0 containers: []
	W0122 21:27:09.046865  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:09.046874  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:09.046944  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:09.097375  312675 cri.go:89] found id: ""
	I0122 21:27:09.097410  312675 logs.go:282] 0 containers: []
	W0122 21:27:09.097422  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:09.097438  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:09.097502  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:09.146732  312675 cri.go:89] found id: ""
	I0122 21:27:09.146770  312675 logs.go:282] 0 containers: []
	W0122 21:27:09.146783  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:09.146791  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:09.146863  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:09.193627  312675 cri.go:89] found id: ""
	I0122 21:27:09.193664  312675 logs.go:282] 0 containers: []
	W0122 21:27:09.193675  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:09.193690  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:09.193714  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:09.260416  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:09.260471  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:09.281528  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:09.281575  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:09.362059  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:09.362100  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:09.362118  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:09.451750  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:09.451795  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:12.007830  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:12.027611  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:12.027694  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:12.072902  312675 cri.go:89] found id: ""
	I0122 21:27:12.072940  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.072956  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:12.072964  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:12.073033  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:12.120197  312675 cri.go:89] found id: ""
	I0122 21:27:12.120236  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.120248  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:12.120258  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:12.120329  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:12.166052  312675 cri.go:89] found id: ""
	I0122 21:27:12.166086  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.166098  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:12.166106  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:12.166198  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:12.208417  312675 cri.go:89] found id: ""
	I0122 21:27:12.208447  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.208456  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:12.208465  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:12.208521  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:12.249645  312675 cri.go:89] found id: ""
	I0122 21:27:12.249689  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.249703  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:12.249712  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:12.249786  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:12.295971  312675 cri.go:89] found id: ""
	I0122 21:27:12.296000  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.296009  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:12.296015  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:12.296084  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:12.336102  312675 cri.go:89] found id: ""
	I0122 21:27:12.336142  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.336155  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:12.336171  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:12.336241  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:12.387860  312675 cri.go:89] found id: ""
	I0122 21:27:12.387928  312675 logs.go:282] 0 containers: []
	W0122 21:27:12.387946  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:12.387961  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:12.387982  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:12.483707  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:12.483740  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:12.483759  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:12.568659  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:12.568707  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:12.625938  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:12.625976  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:12.697065  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:12.697116  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:15.214329  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:15.235785  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:15.235906  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:15.285768  312675 cri.go:89] found id: ""
	I0122 21:27:15.285806  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.285819  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:15.285827  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:15.285894  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:15.332194  312675 cri.go:89] found id: ""
	I0122 21:27:15.332229  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.332240  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:15.332249  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:15.332331  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:15.380094  312675 cri.go:89] found id: ""
	I0122 21:27:15.380131  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.380143  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:15.380152  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:15.380224  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:15.429926  312675 cri.go:89] found id: ""
	I0122 21:27:15.429968  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.429985  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:15.429994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:15.430067  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:15.481951  312675 cri.go:89] found id: ""
	I0122 21:27:15.481990  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.482002  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:15.482010  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:15.482085  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:15.533315  312675 cri.go:89] found id: ""
	I0122 21:27:15.533350  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.533361  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:15.533370  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:15.533444  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:15.590863  312675 cri.go:89] found id: ""
	I0122 21:27:15.590907  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.590920  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:15.590930  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:15.591011  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:15.640965  312675 cri.go:89] found id: ""
	I0122 21:27:15.640997  312675 logs.go:282] 0 containers: []
	W0122 21:27:15.641007  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:15.641021  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:15.641039  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:15.701995  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:15.702043  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:15.721127  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:15.721165  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:15.837898  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:15.837930  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:15.837947  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:15.963802  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:15.963853  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:18.522012  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:18.539180  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:18.539246  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:18.586063  312675 cri.go:89] found id: ""
	I0122 21:27:18.586104  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.586125  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:18.586134  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:18.586227  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:18.633059  312675 cri.go:89] found id: ""
	I0122 21:27:18.633100  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.633114  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:18.633123  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:18.633184  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:18.680175  312675 cri.go:89] found id: ""
	I0122 21:27:18.680218  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.680231  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:18.680239  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:18.680311  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:18.722429  312675 cri.go:89] found id: ""
	I0122 21:27:18.722463  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.722476  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:18.722485  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:18.722554  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:18.771818  312675 cri.go:89] found id: ""
	I0122 21:27:18.771866  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.771878  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:18.771884  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:18.771968  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:18.840249  312675 cri.go:89] found id: ""
	I0122 21:27:18.840289  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.840301  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:18.840311  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:18.840384  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:18.890533  312675 cri.go:89] found id: ""
	I0122 21:27:18.890572  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.890585  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:18.890593  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:18.890674  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:18.936683  312675 cri.go:89] found id: ""
	I0122 21:27:18.936711  312675 logs.go:282] 0 containers: []
	W0122 21:27:18.936720  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:18.936731  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:18.936745  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:18.994817  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:18.994862  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:19.011004  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:19.011062  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:19.098002  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:19.098032  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:19.098051  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:19.182902  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:19.182949  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:21.730303  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:21.747123  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:21.747212  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:21.793769  312675 cri.go:89] found id: ""
	I0122 21:27:21.793807  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.793827  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:21.793835  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:21.793912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:21.840045  312675 cri.go:89] found id: ""
	I0122 21:27:21.840088  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.840101  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:21.840109  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:21.840187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:21.885265  312675 cri.go:89] found id: ""
	I0122 21:27:21.885302  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.885314  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:21.885323  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:21.885404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:21.937734  312675 cri.go:89] found id: ""
	I0122 21:27:21.937768  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.937777  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:21.937783  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:21.937844  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:21.989238  312675 cri.go:89] found id: ""
	I0122 21:27:21.989276  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.989294  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:21.989300  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:21.989377  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:22.035837  312675 cri.go:89] found id: ""
	I0122 21:27:22.035921  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.035934  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:22.035944  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:22.036016  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:22.091690  312675 cri.go:89] found id: ""
	I0122 21:27:22.091731  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.091745  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:22.091754  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:22.091828  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:22.149775  312675 cri.go:89] found id: ""
	I0122 21:27:22.149888  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.149913  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:22.149958  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:22.150005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:22.213610  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:22.213665  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:22.233970  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:22.234014  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:22.318579  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:22.318606  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:22.318622  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:22.422850  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:22.422899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:24.974063  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:24.990751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:24.990850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:25.036044  312675 cri.go:89] found id: ""
	I0122 21:27:25.036082  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.036094  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:25.036103  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:25.036173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:25.078700  312675 cri.go:89] found id: ""
	I0122 21:27:25.078736  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.078748  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:25.078759  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:25.078829  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:25.134919  312675 cri.go:89] found id: ""
	I0122 21:27:25.134971  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.134984  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:25.134994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:25.135075  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:25.183649  312675 cri.go:89] found id: ""
	I0122 21:27:25.183684  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.183695  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:25.183704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:25.183778  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:25.240357  312675 cri.go:89] found id: ""
	I0122 21:27:25.240401  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.240414  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:25.240425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:25.240555  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:25.284093  312675 cri.go:89] found id: ""
	I0122 21:27:25.284132  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.284141  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:25.284149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:25.284218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:25.328590  312675 cri.go:89] found id: ""
	I0122 21:27:25.328621  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.328632  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:25.328641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:25.328710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:25.378479  312675 cri.go:89] found id: ""
	I0122 21:27:25.378517  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.378529  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:25.378543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:25.378559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:25.433767  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:25.433800  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:25.497717  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:25.497767  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:25.530904  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:25.530961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:25.631676  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:25.631701  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:25.631717  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:28.221852  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:28.236702  312675 kubeadm.go:597] duration metric: took 4m3.036103838s to restartPrimaryControlPlane
	W0122 21:27:28.236803  312675 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:27:28.236837  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:27:30.647940  312675 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.411072952s)
	I0122 21:27:30.648042  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:27:30.669610  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:30.684678  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:30.698168  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:30.698232  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:30.698285  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:30.708774  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:30.708855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:30.720213  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:30.731121  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:30.731207  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:30.743153  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.754160  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:30.754262  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.765730  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:30.776902  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:30.776990  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:30.788361  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:27:31.040925  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:29:27.087272  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:29:27.087393  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:29:27.089567  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.089666  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.089781  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.089958  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.090084  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:27.090165  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:27.092167  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:27.092283  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:27.092358  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:27.092462  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:27.092535  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:27.092611  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:27.092682  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:27.092771  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:27.092848  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:27.092976  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:27.093094  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:27.093166  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:27.093261  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:27.093350  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:27.093398  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:27.093476  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:27.093559  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:27.093650  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:27.093720  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:27.093761  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:27.093818  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:27.095338  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:27.095468  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:27.095555  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:27.095632  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:27.095710  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:27.095838  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:29:27.095878  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:29:27.095937  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096106  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096195  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096453  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096565  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096796  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096867  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097090  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097177  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097367  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097386  312675 kubeadm.go:310] 
	I0122 21:29:27.097443  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:29:27.097512  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:29:27.097527  312675 kubeadm.go:310] 
	I0122 21:29:27.097557  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:29:27.097611  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:29:27.097761  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:29:27.097783  312675 kubeadm.go:310] 
	I0122 21:29:27.097878  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:29:27.097928  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:29:27.097955  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:29:27.097962  312675 kubeadm.go:310] 
	I0122 21:29:27.098055  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:29:27.098120  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:29:27.098127  312675 kubeadm.go:310] 
	I0122 21:29:27.098272  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:29:27.098357  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:29:27.098434  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:29:27.098533  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:29:27.098585  312675 kubeadm.go:310] 
	W0122 21:29:27.098687  312675 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
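	(Editor's note: the repeated [kubelet-check] lines above are kubeadm polling the kubelet's healthz endpoint, the equivalent of `curl -sSL http://localhost:10248/healthz`, until the 4m0s wait expires. The following is a minimal standalone sketch of that probe loop, not minikube or kubeadm source; the port and the timeout are taken from the log itself.)

// healthz_probe.go: illustrative sketch of the [kubelet-check] probe described above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://localhost:10248/healthz" // kubelet healthz endpoint polled by kubeadm
	deadline := time.Now().Add(4 * time.Minute)  // mirrors the "can take up to 4m0s" wait
	client := &http.Client{Timeout: 5 * time.Second}

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// The state shown in this run: connection refused because the kubelet never came up.
			fmt.Printf("kubelet not healthy yet: %v\n", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("kubelet is healthy")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}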
	
	I0122 21:29:27.098731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:29:27.599261  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:29:27.617267  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:29:27.629164  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:29:27.629190  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:29:27.629255  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:29:27.641001  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:29:27.641072  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:29:27.653446  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:29:27.666334  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:29:27.666426  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:29:27.678551  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.689687  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:29:27.689757  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.702030  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:29:27.713507  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:29:27.713585  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
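	(Editor's note: the grep/rm sequence above is the stale-config check logged at kubeadm.go:163: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint and removed if the endpoint, or the file itself, is missing. A rough sketch of that pattern, assumed equivalent rather than the actual minikube code, follows.)

// stale_config_cleanup.go: illustrative sketch of the kubeconfig cleanup shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or, as in this run, the file) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, f, err)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Printf("failed to remove %s: %v\n", f, err)
			}
		}
	}
}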
	I0122 21:29:27.726067  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:29:27.816417  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.816555  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.995432  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.995599  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.995745  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:28.218104  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:28.220056  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:28.220190  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:28.220278  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:28.220383  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:28.220486  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:28.220573  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:28.220648  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:28.220880  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:28.221175  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:28.222058  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:28.222351  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:28.222442  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:28.222530  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:28.304455  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:28.572192  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:28.869356  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:29.053609  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:29.082264  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:29.082429  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:29.082503  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:29.253931  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:29.256894  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:29.257044  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:29.267513  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:29.269154  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:29.270276  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:29.274228  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:30:09.277116  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:30:09.277238  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:09.277504  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:14.278173  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:14.278454  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:24.278945  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:24.279149  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:44.279492  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:44.279715  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278351  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:31:24.278612  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278628  312675 kubeadm.go:310] 
	I0122 21:31:24.278664  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:31:24.278723  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:31:24.278735  312675 kubeadm.go:310] 
	I0122 21:31:24.278775  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:31:24.278827  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:31:24.278956  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:31:24.278981  312675 kubeadm.go:310] 
	I0122 21:31:24.279066  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:31:24.279109  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:31:24.279140  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:31:24.279147  312675 kubeadm.go:310] 
	I0122 21:31:24.279253  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:31:24.279353  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:31:24.279373  312675 kubeadm.go:310] 
	I0122 21:31:24.279516  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:31:24.279639  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:31:24.279754  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:31:24.279837  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:31:24.279895  312675 kubeadm.go:310] 
	I0122 21:31:24.280842  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:31:24.280984  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:31:24.281074  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:31:24.281148  312675 kubeadm.go:394] duration metric: took 7m59.138107768s to StartCluster
	I0122 21:31:24.281220  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:31:24.281302  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:31:24.331184  312675 cri.go:89] found id: ""
	I0122 21:31:24.331225  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.331235  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:31:24.331242  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:31:24.331309  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:31:24.372934  312675 cri.go:89] found id: ""
	I0122 21:31:24.372963  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.372972  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:31:24.372979  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:31:24.373034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:31:24.413239  312675 cri.go:89] found id: ""
	I0122 21:31:24.413274  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.413284  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:31:24.413290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:31:24.413347  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:31:24.452513  312675 cri.go:89] found id: ""
	I0122 21:31:24.452552  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.452564  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:31:24.452573  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:31:24.452644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:31:24.491580  312675 cri.go:89] found id: ""
	I0122 21:31:24.491617  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.491629  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:31:24.491637  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:31:24.491710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:31:24.544823  312675 cri.go:89] found id: ""
	I0122 21:31:24.544856  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.544865  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:31:24.544872  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:31:24.544930  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:31:24.585047  312675 cri.go:89] found id: ""
	I0122 21:31:24.585085  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.585099  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:31:24.585108  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:31:24.585175  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:31:24.624152  312675 cri.go:89] found id: ""
	I0122 21:31:24.624189  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.624201  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
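	(Editor's note: the block above enumerates CRI containers for each control-plane component via `sudo crictl ps -a --quiet --name=<component>`; every query returned nothing, meaning no control-plane container ever started. A rough standalone sketch of that enumeration, not the cri.go source, is shown below.)

// cri_containers.go: illustrative sketch of the per-component container listing shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl query for %s failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		// In this run every component reports 0 containers.
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}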
	I0122 21:31:24.624216  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:31:24.624231  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:31:24.717945  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:31:24.717971  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:31:24.717989  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:31:24.826216  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:31:24.826260  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:31:24.878403  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:31:24.878439  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:31:24.931058  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:31:24.931102  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
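	(Editor's note: the "Gathering logs for ..." lines above collect post-failure diagnostics: describe nodes against the node's kubeconfig, the CRI-O and kubelet journals, a full container listing, and recent kernel warnings. The sketch below reproduces that collection under the assumption that it runs on the minikube node with crictl and journalctl on PATH; it is not the minikube logs.go implementation.)

// gather_diagnostics.go: illustrative sketch of the diagnostic collection shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl", "describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig"},
		{"sudo", "journalctl", "-u", "crio", "-n", "400"},
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		{"sudo", "dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("==> %v\n%s\n", c, out)
		if err != nil {
			// In this run `kubectl describe nodes` fails because the apiserver on :8443 is down.
			fmt.Printf("command failed: %v\n", err)
		}
	}
}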
	W0122 21:31:24.947080  312675 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:31:24.947171  312675 out.go:270] * 
	* 
	W0122 21:31:24.947310  312675 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.947331  312675 out.go:270] * 
	* 
	W0122 21:31:24.948119  312675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:31:24.951080  312675 out.go:201] 
	W0122 21:31:24.952375  312675 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.952433  312675 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:31:24.952459  312675 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:31:24.954056  312675 out.go:201] 

                                                
                                                
** /stderr **
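The kubeadm output above points at the kubelet and the container runtime as the first things to inspect; a minimal sketch of those checks run against this profile (assuming shell access to the node via "minikube ssh"; the crio socket path, the healthz endpoint, and the systemctl/journalctl/crictl invocations are the ones shown in the log, and CONTAINERID is a placeholder):

	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "curl -sSL http://localhost:10248/healthz"
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-181389 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"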
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
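The suggestion printed at the end of the failed start is to pass the kubelet's cgroup driver explicitly; a sketch of that retry using the same arguments as the failing command (whether the cgroup driver is actually the cause of this particular failure is not confirmed by the log, this is only the retry the suggestion implies):

	out/minikube-linux-amd64 start -p old-k8s-version-181389 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd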
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (280.608737ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-181389 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-806477                  | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC | 22 Jan 25 21:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-806477                                   | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-635179                 | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-181389        | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991469       | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | default-k8s-diff-port-991469                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-181389             | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-635179 image list                          | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-489789             | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-489789                  | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-489789 image list                           | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:27:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:27:23.911116  314650 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:27:23.911744  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.911765  314650 out.go:358] Setting ErrFile to fd 2...
	I0122 21:27:23.911774  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.912250  314650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:27:23.913222  314650 out.go:352] Setting JSON to false
	I0122 21:27:23.914762  314650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14990,"bootTime":1737566254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:27:23.914894  314650 start.go:139] virtualization: kvm guest
	I0122 21:27:23.916750  314650 out.go:177] * [newest-cni-489789] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:27:23.918320  314650 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:27:23.918320  314650 notify.go:220] Checking for updates...
	I0122 21:27:23.920824  314650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:27:23.922296  314650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:23.923574  314650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:27:23.924769  314650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:27:23.926102  314650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:27:23.927578  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:23.928058  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.928125  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.944579  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0122 21:27:23.945073  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.945640  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.945664  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.946073  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.946377  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:23.946689  314650 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:27:23.947048  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.947102  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.963420  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0122 21:27:23.963873  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.964454  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.964502  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.964926  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.965154  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.005605  314650 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:27:24.007129  314650 start.go:297] selected driver: kvm2
	I0122 21:27:24.007153  314650 start.go:901] validating driver "kvm2" against &{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.007318  314650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:27:24.008093  314650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.008222  314650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:27:24.024940  314650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:27:24.025456  314650 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:24.025502  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:24.025549  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:24.025588  314650 start.go:340] cluster config:
	{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.025695  314650 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.027752  314650 out.go:177] * Starting "newest-cni-489789" primary control-plane node in "newest-cni-489789" cluster
	I0122 21:27:24.029033  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:24.029101  314650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:27:24.029119  314650 cache.go:56] Caching tarball of preloaded images
	I0122 21:27:24.029287  314650 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:27:24.029306  314650 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:27:24.029475  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:24.029808  314650 start.go:360] acquireMachinesLock for newest-cni-489789: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:27:24.029874  314650 start.go:364] duration metric: took 34.85µs to acquireMachinesLock for "newest-cni-489789"
	I0122 21:27:24.029897  314650 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:27:24.029908  314650 fix.go:54] fixHost starting: 
	I0122 21:27:24.030383  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:24.030486  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:24.046512  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0122 21:27:24.047013  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:24.047605  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:24.047640  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:24.048047  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:24.048290  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.048464  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:24.050271  314650 fix.go:112] recreateIfNeeded on newest-cni-489789: state=Stopped err=<nil>
	I0122 21:27:24.050304  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	W0122 21:27:24.050473  314650 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:27:24.052496  314650 out.go:177] * Restarting existing kvm2 VM for "newest-cni-489789" ...
	I0122 21:27:21.730303  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:21.747123  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:21.747212  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:21.793769  312675 cri.go:89] found id: ""
	I0122 21:27:21.793807  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.793827  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:21.793835  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:21.793912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:21.840045  312675 cri.go:89] found id: ""
	I0122 21:27:21.840088  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.840101  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:21.840109  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:21.840187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:21.885265  312675 cri.go:89] found id: ""
	I0122 21:27:21.885302  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.885314  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:21.885323  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:21.885404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:21.937734  312675 cri.go:89] found id: ""
	I0122 21:27:21.937768  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.937777  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:21.937783  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:21.937844  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:21.989238  312675 cri.go:89] found id: ""
	I0122 21:27:21.989276  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.989294  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:21.989300  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:21.989377  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:22.035837  312675 cri.go:89] found id: ""
	I0122 21:27:22.035921  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.035934  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:22.035944  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:22.036016  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:22.091690  312675 cri.go:89] found id: ""
	I0122 21:27:22.091731  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.091745  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:22.091754  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:22.091828  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:22.149775  312675 cri.go:89] found id: ""
	I0122 21:27:22.149888  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.149913  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:22.149958  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:22.150005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:22.213610  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:22.213665  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:22.233970  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:22.234014  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:22.318579  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:22.318606  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:22.318622  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:22.422850  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:22.422899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:24.974063  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:24.990751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:24.990850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:25.036044  312675 cri.go:89] found id: ""
	I0122 21:27:25.036082  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.036094  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:25.036103  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:25.036173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:25.078700  312675 cri.go:89] found id: ""
	I0122 21:27:25.078736  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.078748  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:25.078759  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:25.078829  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:25.134919  312675 cri.go:89] found id: ""
	I0122 21:27:25.134971  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.134984  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:25.134994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:25.135075  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:25.183649  312675 cri.go:89] found id: ""
	I0122 21:27:25.183684  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.183695  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:25.183704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:25.183778  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:25.240357  312675 cri.go:89] found id: ""
	I0122 21:27:25.240401  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.240414  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:25.240425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:25.240555  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:25.284093  312675 cri.go:89] found id: ""
	I0122 21:27:25.284132  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.284141  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:25.284149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:25.284218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:25.328590  312675 cri.go:89] found id: ""
	I0122 21:27:25.328621  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.328632  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:25.328641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:25.328710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:25.378479  312675 cri.go:89] found id: ""
	I0122 21:27:25.378517  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.378529  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:25.378543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:25.378559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:25.433767  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:25.433800  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:24.053834  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Start
	I0122 21:27:24.054152  314650 main.go:141] libmachine: (newest-cni-489789) starting domain...
	I0122 21:27:24.054175  314650 main.go:141] libmachine: (newest-cni-489789) ensuring networks are active...
	I0122 21:27:24.055132  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network default is active
	I0122 21:27:24.055534  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network mk-newest-cni-489789 is active
	I0122 21:27:24.055963  314650 main.go:141] libmachine: (newest-cni-489789) getting domain XML...
	I0122 21:27:24.056886  314650 main.go:141] libmachine: (newest-cni-489789) creating domain...
	I0122 21:27:25.457503  314650 main.go:141] libmachine: (newest-cni-489789) waiting for IP...
	I0122 21:27:25.458754  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.459431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.459544  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.459394  314684 retry.go:31] will retry after 258.579884ms: waiting for domain to come up
	I0122 21:27:25.720098  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.720657  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.720704  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.720649  314684 retry.go:31] will retry after 347.192205ms: waiting for domain to come up
	I0122 21:27:26.069095  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.069843  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.069880  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.069813  314684 retry.go:31] will retry after 318.422908ms: waiting for domain to come up
	I0122 21:27:26.390692  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.391374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.391431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.391350  314684 retry.go:31] will retry after 516.847382ms: waiting for domain to come up
	I0122 21:27:26.910252  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.910831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.910862  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.910801  314684 retry.go:31] will retry after 657.195872ms: waiting for domain to come up
	I0122 21:27:27.569972  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:27.570617  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:27.570651  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:27.570590  314684 retry.go:31] will retry after 601.660948ms: waiting for domain to come up
	I0122 21:27:28.173427  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:28.174022  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:28.174065  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:28.173988  314684 retry.go:31] will retry after 839.292486ms: waiting for domain to come up
	I0122 21:27:25.497717  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:25.497767  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:25.530904  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:25.530961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:25.631676  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:25.631701  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:25.631717  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:28.221852  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:28.236702  312675 kubeadm.go:597] duration metric: took 4m3.036103838s to restartPrimaryControlPlane
	W0122 21:27:28.236803  312675 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:27:28.236837  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:27:29.014929  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:29.015535  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:29.015569  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:29.015501  314684 retry.go:31] will retry after 1.28366543s: waiting for domain to come up
	I0122 21:27:30.300346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:30.300806  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:30.300834  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:30.300775  314684 retry.go:31] will retry after 1.437378164s: waiting for domain to come up
	I0122 21:27:31.739437  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:31.740073  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:31.740106  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:31.740043  314684 retry.go:31] will retry after 1.547235719s: waiting for domain to come up
	I0122 21:27:33.289857  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:33.290395  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:33.290452  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:33.290357  314684 retry.go:31] will retry after 2.864838858s: waiting for domain to come up
	I0122 21:27:30.647940  312675 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.411072952s)
	I0122 21:27:30.648042  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:27:30.669610  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:30.684678  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:30.698168  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:30.698232  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:30.698285  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:30.708774  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:30.708855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:30.720213  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:30.731121  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:30.731207  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:30.743153  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.754160  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:30.754262  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.765730  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:30.776902  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:30.776990  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:30.788361  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:27:31.040925  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:27:36.157916  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:36.158675  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:36.158706  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:36.158608  314684 retry.go:31] will retry after 3.253566336s: waiting for domain to come up
	I0122 21:27:39.413761  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:39.414347  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:39.414380  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:39.414310  314684 retry.go:31] will retry after 3.952766125s: waiting for domain to come up
	I0122 21:27:43.371406  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371943  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has current primary IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371999  314650 main.go:141] libmachine: (newest-cni-489789) found domain IP: 192.168.50.146
	I0122 21:27:43.372024  314650 main.go:141] libmachine: (newest-cni-489789) reserving static IP address...
	I0122 21:27:43.372454  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.372482  314650 main.go:141] libmachine: (newest-cni-489789) DBG | skip adding static IP to network mk-newest-cni-489789 - found existing host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"}
	I0122 21:27:43.372502  314650 main.go:141] libmachine: (newest-cni-489789) reserved static IP address 192.168.50.146 for domain newest-cni-489789
	I0122 21:27:43.372516  314650 main.go:141] libmachine: (newest-cni-489789) waiting for SSH...
	I0122 21:27:43.372527  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Getting to WaitForSSH function...
	I0122 21:27:43.374698  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.374984  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.375016  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.375148  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH client type: external
	I0122 21:27:43.375173  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa (-rw-------)
	I0122 21:27:43.375212  314650 main.go:141] libmachine: (newest-cni-489789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:27:43.375232  314650 main.go:141] libmachine: (newest-cni-489789) DBG | About to run SSH command:
	I0122 21:27:43.375243  314650 main.go:141] libmachine: (newest-cni-489789) DBG | exit 0
	I0122 21:27:43.503039  314650 main.go:141] libmachine: (newest-cni-489789) DBG | SSH cmd err, output: <nil>: 
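The WaitForSSH step above simply runs "exit 0" over SSH until the guest answers. Below is a minimal, illustrative Go sketch of the same liveness probe using golang.org/x/crypto/ssh; the address, user, key path and retry interval are taken from the log or assumed, and this is not minikube's actual implementation.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest and runs a no-op command until it succeeds or
// the timeout expires, mirroring the "About to run SSH command: exit 0" probe.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0") // same no-op probe as in the log
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("ssh not reachable within %s", timeout)
}

func main() {
	key := os.ExpandEnv("$HOME/.minikube/machines/newest-cni-489789/id_rsa") // assumed local path
	if err := waitForSSH("192.168.50.146:22", "docker", key, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
}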
	I0122 21:27:43.503449  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetConfigRaw
	I0122 21:27:43.504138  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.507198  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507562  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.507607  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507876  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:43.508166  314650 machine.go:93] provisionDockerMachine start ...
	I0122 21:27:43.508196  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:43.508518  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.511111  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511408  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.511442  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511632  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.511842  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512002  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512147  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.512352  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.512624  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.512643  314650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:27:43.619425  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:27:43.619472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619742  314650 buildroot.go:166] provisioning hostname "newest-cni-489789"
	I0122 21:27:43.619772  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619998  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.622781  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623242  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.623285  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623505  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.623728  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.623892  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.624013  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.624154  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.624410  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.624432  314650 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-489789 && echo "newest-cni-489789" | sudo tee /etc/hostname
	I0122 21:27:43.747575  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-489789
	
	I0122 21:27:43.747605  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.750745  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751080  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.751127  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751553  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.751775  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.751918  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.752035  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.752185  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.752425  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.752465  314650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-489789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-489789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-489789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:27:43.865258  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:27:43.865290  314650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:27:43.865312  314650 buildroot.go:174] setting up certificates
	I0122 21:27:43.865327  314650 provision.go:84] configureAuth start
	I0122 21:27:43.865362  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.865704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.868648  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.868993  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.869025  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.869222  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.871572  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.871860  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.871894  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.872044  314650 provision.go:143] copyHostCerts
	I0122 21:27:43.872109  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:27:43.872130  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:27:43.872205  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:27:43.872312  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:27:43.872321  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:27:43.872346  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:27:43.872433  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:27:43.872447  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:27:43.872471  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:27:43.872536  314650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-489789 san=[127.0.0.1 192.168.50.146 localhost minikube newest-cni-489789]
	I0122 21:27:44.234481  314650 provision.go:177] copyRemoteCerts
	I0122 21:27:44.234579  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:27:44.234618  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.237848  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238297  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.238332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238604  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.238788  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.238988  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.239154  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.326083  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:27:44.355837  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:27:44.387644  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:27:44.418003  314650 provision.go:87] duration metric: took 552.65522ms to configureAuth
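The configureAuth phase above generates a server certificate signed by the local CA with the SANs listed in the log, then copies it to /etc/docker on the guest. The following is only a rough sketch of issuing such a certificate with crypto/x509; file names and SANs come from the log, error handling is elided, and the CA key is assumed to be PKCS#1 PEM.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (ca.pem / ca-key.pem in the log).
	// Error handling elided for brevity in this sketch.
	caCertPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Fresh key pair for the server certificate.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-489789"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1 192.168.50.146 localhost minikube newest-cni-489789
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.146")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-489789"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

	_ = os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}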
	I0122 21:27:44.418039  314650 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:27:44.418347  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:44.418475  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.421349  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.421796  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.421839  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.422067  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.422301  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422470  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422603  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.422810  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.423129  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.423156  314650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:27:44.671197  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:27:44.671232  314650 machine.go:96] duration metric: took 1.163046458s to provisionDockerMachine
	I0122 21:27:44.671247  314650 start.go:293] postStartSetup for "newest-cni-489789" (driver="kvm2")
	I0122 21:27:44.671261  314650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:27:44.671289  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.671667  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:27:44.671704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.674811  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675137  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.675164  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675350  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.675624  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.675817  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.675987  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.759194  314650 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:27:44.764553  314650 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:27:44.764591  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:27:44.764668  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:27:44.764741  314650 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:27:44.764835  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:27:44.778239  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:44.807409  314650 start.go:296] duration metric: took 136.131239ms for postStartSetup
	I0122 21:27:44.807474  314650 fix.go:56] duration metric: took 20.777566838s for fixHost
	I0122 21:27:44.807580  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.810883  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811279  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.811312  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.811736  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.811908  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.812086  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.812268  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.812448  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.812459  314650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:27:44.915903  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737581264.870208902
	
	I0122 21:27:44.915934  314650 fix.go:216] guest clock: 1737581264.870208902
	I0122 21:27:44.915945  314650 fix.go:229] Guest: 2025-01-22 21:27:44.870208902 +0000 UTC Remote: 2025-01-22 21:27:44.807479632 +0000 UTC m=+20.941890306 (delta=62.72927ms)
	I0122 21:27:44.915983  314650 fix.go:200] guest clock delta is within tolerance: 62.72927ms
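The clock check above runs `date +%s.%N` on the guest and compares it with the host time, reporting the delta seen in the log. A small illustrative Go sketch of that comparison follows; the one-second tolerance is an assumption, not the value minikube uses.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` captured from the guest
// and returns the absolute offset from the supplied local time.
func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(guest.Sub(local)))), nil
}

func main() {
	// Sample value taken from the log line above; compared against "now" it
	// will of course be large, which is fine for demonstrating the check.
	delta, _ := guestClockDelta("1737581264.870208902\n", time.Now())
	const tolerance = time.Second // assumed tolerance
	if delta > tolerance {
		fmt.Printf("guest clock delta %s exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	}
}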
	I0122 21:27:44.915991  314650 start.go:83] releasing machines lock for "newest-cni-489789", held for 20.886101347s
	I0122 21:27:44.916019  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.916292  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:44.919374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.919795  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.919831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.920026  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920725  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920966  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.921087  314650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:27:44.921144  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.921271  314650 ssh_runner.go:195] Run: cat /version.json
	I0122 21:27:44.921303  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.924275  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924511  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924546  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924566  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924759  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.924827  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924871  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924995  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925090  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.925199  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925283  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925319  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.925420  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925532  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:45.025072  314650 ssh_runner.go:195] Run: systemctl --version
	I0122 21:27:45.032652  314650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:27:45.187726  314650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:27:45.194767  314650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:27:45.194851  314650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:27:45.213610  314650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:27:45.213644  314650 start.go:495] detecting cgroup driver to use...
	I0122 21:27:45.213723  314650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:27:45.231803  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:27:45.247682  314650 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:27:45.247801  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:27:45.263581  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:27:45.279536  314650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:27:45.406663  314650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:27:45.562297  314650 docker.go:233] disabling docker service ...
	I0122 21:27:45.562383  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:27:45.579904  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:27:45.595144  314650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:27:45.739957  314650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:27:45.866024  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:27:45.882728  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:27:45.907297  314650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:27:45.907388  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.920271  314650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:27:45.920341  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.933095  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.945711  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.958348  314650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:27:45.972409  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.989090  314650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.011819  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.025229  314650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:27:46.038393  314650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:27:46.038475  314650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:27:46.055252  314650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
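The sequence above shows the usual fallback on a fresh VM: the bridge netfilter sysctl cannot be read, so the br_netfilter module is loaded and IPv4 forwarding is enabled. A minimal Go sketch of the same pattern, assuming it runs as root on the guest:

package main

import (
	"log"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// First try to read the sysctl; on a fresh VM the br_netfilter module
	// may not be loaded yet, so this can fail with status 255 as in the log.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	// Fall back to loading the module, then enable IPv4 forwarding.
	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
		return err
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}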
	I0122 21:27:46.068173  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:46.196285  314650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:27:46.295821  314650 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:27:46.295921  314650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:27:46.301506  314650 start.go:563] Will wait 60s for crictl version
	I0122 21:27:46.301587  314650 ssh_runner.go:195] Run: which crictl
	I0122 21:27:46.306074  314650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:27:46.352624  314650 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:27:46.352727  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.385398  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.422040  314650 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:27:46.423591  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:46.426902  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427305  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:46.427332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427679  314650 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0122 21:27:46.432609  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:46.448941  314650 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0122 21:27:46.450413  314650 kubeadm.go:883] updating cluster {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:27:46.450575  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:46.450683  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:46.496073  314650 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:27:46.496165  314650 ssh_runner.go:195] Run: which lz4
	I0122 21:27:46.500895  314650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:27:46.505854  314650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:27:46.505909  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:27:48.159588  314650 crio.go:462] duration metric: took 1.658732075s to copy over tarball
	I0122 21:27:48.159687  314650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:27:50.643587  314650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483861806s)
	I0122 21:27:50.643623  314650 crio.go:469] duration metric: took 2.483996867s to extract the tarball
	I0122 21:27:50.643632  314650 ssh_runner.go:146] rm: /preloaded.tar.lz4
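Since no preloaded images were found, the tarball is copied over and unpacked under /var so CRI-O's image store is populated, then removed. An illustrative local equivalent of that extraction step, using the same tar flags as the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // assumed to have been copied over already
	// Same flags as in the log: keep xattrs/capabilities, decompress with lz4,
	// unpack under /var so the container runtime's image store is populated.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// The log removes the tarball afterwards to free space.
	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
		log.Fatal(err)
	}
}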
	I0122 21:27:50.683708  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:50.732147  314650 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:27:50.732183  314650 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:27:50.732194  314650 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.32.1 crio true true} ...
	I0122 21:27:50.732350  314650 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-489789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:27:50.732425  314650 ssh_runner.go:195] Run: crio config
	I0122 21:27:50.789877  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:50.789904  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:50.789920  314650 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0122 21:27:50.789953  314650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-489789 NodeName:newest-cni-489789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:27:50.790132  314650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-489789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.146"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:27:50.790261  314650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:27:50.801652  314650 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:27:50.801742  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:27:50.813168  314650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0122 21:27:50.832707  314650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:27:50.852375  314650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
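The kubeadm.yaml shown above is rendered from the node's values (IP, port, CRI socket, name) before being written to /var/tmp/minikube. Below is a small text/template sketch that produces the InitConfiguration fragment from those values; it is only an illustration, not the template minikube actually ships.

package main

import (
	"os"
	"text/template"
)

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
`

type node struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	// Values taken from the generated config in the log above.
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	_ = tmpl.Execute(os.Stdout, node{
		NodeIP:        "192.168.50.146",
		APIServerPort: 8443,
		CRISocket:     "unix:///var/run/crio/crio.sock",
		NodeName:      "newest-cni-489789",
	})
}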
	I0122 21:27:50.875185  314650 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I0122 21:27:50.879818  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:50.893992  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:51.040056  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:51.060681  314650 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789 for IP: 192.168.50.146
	I0122 21:27:51.060711  314650 certs.go:194] generating shared ca certs ...
	I0122 21:27:51.060737  314650 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.060940  314650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:27:51.061018  314650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:27:51.061036  314650 certs.go:256] generating profile certs ...
	I0122 21:27:51.061157  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/client.key
	I0122 21:27:51.061251  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key.de28c3d3
	I0122 21:27:51.061317  314650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key
	I0122 21:27:51.061482  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:27:51.061526  314650 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:27:51.061539  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:27:51.061572  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:27:51.061603  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:27:51.061636  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:27:51.061692  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:51.062633  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:27:51.098858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:27:51.145243  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:27:51.180019  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:27:51.208916  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:27:51.237139  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:27:51.270858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:27:51.306734  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:27:51.341424  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:27:51.370701  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:27:51.402552  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:27:51.431817  314650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:27:51.452816  314650 ssh_runner.go:195] Run: openssl version
	I0122 21:27:51.460223  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:27:51.474716  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480785  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480874  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.489093  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:27:51.501870  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:27:51.514659  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520559  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520713  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.527928  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:27:51.541856  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:27:51.555463  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561295  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561368  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.568531  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:27:51.584716  314650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:27:51.590762  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:27:51.598592  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:27:51.605666  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:27:51.613414  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:27:51.621894  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:27:51.629916  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
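Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check can be expressed directly with crypto/x509; this sketch uses one of the paths from the log and assumes it is run on the guest where those files exist.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within the
// given window, equivalent to `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Same kind of certificate the log checks; path taken from the commands above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}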
	I0122 21:27:51.636995  314650 kubeadm.go:392] StartCluster: {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mult
iNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:51.637138  314650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:27:51.637358  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.691610  314650 cri.go:89] found id: ""
	I0122 21:27:51.691683  314650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:27:51.703943  314650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:27:51.703976  314650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:27:51.704044  314650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:27:51.715920  314650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:27:51.716767  314650 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-489789" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:51.717203  314650 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-489789" cluster setting kubeconfig missing "newest-cni-489789" context setting]
	I0122 21:27:51.717901  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.729230  314650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:27:51.741794  314650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I0122 21:27:51.741842  314650 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:27:51.741859  314650 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:27:51.741927  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.789068  314650 cri.go:89] found id: ""
	I0122 21:27:51.789171  314650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:27:51.809451  314650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:51.821492  314650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:51.821515  314650 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:51.821564  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:51.833428  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:51.833507  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:51.845423  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:51.856151  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:51.856247  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:51.868260  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.879595  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:51.879671  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.892482  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:51.905485  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:51.905558  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:51.917498  314650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:51.930487  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:52.072199  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.069420  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.321398  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.393577  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
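Note on the restart path above: rather than a full `kubeadm init`, minikube re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A minimal, illustrative Go sketch of driving those same phases locally follows; it is not minikube's actual ssh_runner implementation, and the binary and config paths are assumptions copied from the log lines above.

```go
// Illustrative only: re-run the kubeadm init phases seen in the log above,
// executed locally instead of through minikube's ssh_runner over SSH.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.32.1/kubeadm" // assumed path (from the log)
	config := "/var/tmp/minikube/kubeadm.yaml"              // assumed path (from the log)

	// The restart path runs individual phases instead of a full `kubeadm init`.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}

	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		fmt.Printf("running: %s %v\n", kubeadm, args)
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %v failed: %v", p, err)
		}
	}
}
```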
	I0122 21:27:53.471920  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:53.472027  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:53.972577  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.472481  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.972531  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.989674  314650 api_server.go:72] duration metric: took 1.517756303s to wait for apiserver process to appear ...
	I0122 21:27:54.989707  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:54.989729  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.208473  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.208515  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.208536  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.292726  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.292780  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.490170  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.499620  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.499655  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:57.990312  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.998214  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.998257  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.489875  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.496876  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:58.496913  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.990600  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.995909  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.004894  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.004943  314650 api_server.go:131] duration metric: took 4.015227175s to wait for apiserver health ...
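The healthz wait above follows a simple pattern: probe /healthz roughly every 500ms, tolerate the transient 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still failing) responses, and stop once the apiserver answers 200 "ok". A minimal sketch of that loop, assuming an anonymous HTTPS probe with TLS verification disabled (the endpoint and timings are taken from the log; this is not minikube's api_server.go code):

```go
// Poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// 403 and 500 are expected while the control plane settles; log and retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.146:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```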
	I0122 21:27:59.004977  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:59.004987  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:59.006689  314650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:27:59.008029  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:27:59.020070  314650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
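The 496-byte file copied above is the bridge CNI configuration for the crio runtime. The log does not show its contents; the sketch below writes a representative bridge + portmap conflist only, reusing the pod-network-cidr (10.42.0.0/16) from the cluster config earlier in the log as an assumed subnet. It is not the exact file minikube generates.

```go
// Write a representative bridge CNI conflist to the path used in the log above.
package main

import (
	"log"
	"os"
)

// Assumed, illustrative contents; minikube's generated file will differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```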
	I0122 21:27:59.044659  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.055648  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.055702  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.055713  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.055724  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.055732  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.055738  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.055754  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.055766  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.055774  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.055788  314650 system_pods.go:74] duration metric: took 11.091605ms to wait for pod list to return data ...
	I0122 21:27:59.055802  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.060105  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.060148  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.060164  314650 node_conditions.go:105] duration metric: took 4.355866ms to run NodePressure ...
	I0122 21:27:59.060188  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:59.384018  314650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:27:59.398090  314650 ops.go:34] apiserver oom_adj: -16
	I0122 21:27:59.398128  314650 kubeadm.go:597] duration metric: took 7.694142189s to restartPrimaryControlPlane
	I0122 21:27:59.398142  314650 kubeadm.go:394] duration metric: took 7.761160632s to StartCluster
	I0122 21:27:59.398170  314650 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.398290  314650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:59.400046  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.400419  314650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:27:59.400556  314650 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:27:59.400665  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:59.400686  314650 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-489789"
	I0122 21:27:59.400707  314650 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-489789"
	W0122 21:27:59.400716  314650 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:27:59.400726  314650 addons.go:69] Setting default-storageclass=true in profile "newest-cni-489789"
	I0122 21:27:59.400741  314650 addons.go:69] Setting dashboard=true in profile "newest-cni-489789"
	I0122 21:27:59.400761  314650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-489789"
	I0122 21:27:59.400768  314650 addons.go:238] Setting addon dashboard=true in "newest-cni-489789"
	W0122 21:27:59.400778  314650 addons.go:247] addon dashboard should already be in state true
	I0122 21:27:59.400815  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.400765  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401235  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401237  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401262  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401321  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.400718  314650 addons.go:69] Setting metrics-server=true in profile "newest-cni-489789"
	I0122 21:27:59.401464  314650 addons.go:238] Setting addon metrics-server=true in "newest-cni-489789"
	W0122 21:27:59.401475  314650 addons.go:247] addon metrics-server should already be in state true
	I0122 21:27:59.401509  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401887  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401975  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.402025  314650 out.go:177] * Verifying Kubernetes components...
	I0122 21:27:59.403359  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0122 21:27:59.421021  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0122 21:27:59.421349  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421459  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421547  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.422098  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422122  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422144  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422325  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422349  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422401  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0122 21:27:59.423146  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423151  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423148  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423359  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.423430  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.423817  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.423841  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.423816  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.423882  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.424405  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.425054  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425105  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.425288  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425335  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.427261  314650 addons.go:238] Setting addon default-storageclass=true in "newest-cni-489789"
	W0122 21:27:59.427282  314650 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:27:59.427316  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.427674  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.427723  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.446713  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0122 21:27:59.446783  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0122 21:27:59.451272  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451373  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451946  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.451969  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452101  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.452121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452538  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.452791  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.452801  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.453414  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.455400  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.455881  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.457716  314650 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:27:59.457751  314650 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:27:59.459475  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:27:59.459504  314650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:27:59.459539  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.460864  314650 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:27:59.462275  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:27:59.462311  314650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:27:59.462354  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.466673  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467509  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.467541  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467851  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.468096  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.468288  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.468589  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.468600  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.469258  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.469308  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.469497  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.469679  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.469875  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.470056  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.473781  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0122 21:27:59.473966  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0122 21:27:59.474357  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474615  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474910  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.474936  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475242  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.475262  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475362  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.475908  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.475957  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.476056  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.476285  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.478535  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.480540  314650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:27:59.481982  314650 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.482013  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:27:59.482045  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.485683  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486142  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.486177  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486465  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.486710  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.486889  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.487038  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.494246  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0122 21:27:59.494801  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.495426  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.495453  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.495905  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.496130  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.498296  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.498565  314650 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.498586  314650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:27:59.498611  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.501861  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502313  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.502346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502646  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.502865  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.503077  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.503233  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.724824  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:59.770671  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:59.770782  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:59.794707  314650 api_server.go:72] duration metric: took 394.235725ms to wait for apiserver process to appear ...
	I0122 21:27:59.794739  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:59.794764  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:59.830916  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.833823  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.833866  314650 api_server.go:131] duration metric: took 39.117571ms to wait for apiserver health ...
	I0122 21:27:59.833879  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.842548  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.866014  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.866078  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.866091  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.866103  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.866113  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.866119  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.866128  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.866137  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.866143  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.866152  314650 system_pods.go:74] duration metric: took 32.265403ms to wait for pod list to return data ...
	I0122 21:27:59.866168  314650 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:27:59.871064  314650 default_sa.go:45] found service account: "default"
	I0122 21:27:59.871106  314650 default_sa.go:55] duration metric: took 4.928382ms for default service account to be created ...
	I0122 21:27:59.871125  314650 kubeadm.go:582] duration metric: took 470.664674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:59.871157  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.875089  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.875125  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.875139  314650 node_conditions.go:105] duration metric: took 3.96814ms to run NodePressure ...
	I0122 21:27:59.875155  314650 start.go:241] waiting for startup goroutines ...
	I0122 21:27:59.879100  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.991147  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:27:59.991183  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:28:00.010416  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:28:00.010448  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:28:00.034463  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:28:00.034502  314650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:28:00.066923  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.066963  314650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:28:00.112671  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.155556  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:28:00.155594  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:28:00.224676  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:28:00.224717  314650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:28:00.402769  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:28:00.402799  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:28:00.611017  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:28:00.611060  314650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:28:00.746957  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:28:00.747012  314650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:28:00.817833  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:28:00.817864  314650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:28:00.905629  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:28:00.905658  314650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:28:00.973450  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:00.973488  314650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:28:01.033649  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
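Both addon installs above (metrics-server and dashboard) boil down to invoking the bundled kubectl with KUBECONFIG pointing at the in-VM kubeconfig and one -f flag per staged manifest. An illustrative sketch of that call follows; the binary and manifest paths are taken from the log, the manifest list is truncated for brevity, and this is not minikube's actual addon code.

```go
// Apply a list of staged addon manifests with the bundled kubectl.
package main

import (
	"log"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m) // one -f flag per manifest, as in the log
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.32.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```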
	I0122 21:28:01.902642  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.023480792s)
	I0122 21:28:01.902735  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902750  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.902850  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.060261694s)
	I0122 21:28:01.902903  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902915  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.904921  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.904989  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.904996  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905018  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905027  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905036  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905033  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905093  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905102  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905104  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905492  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905513  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905534  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905540  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905567  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905581  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.914609  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.914638  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.914975  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.915021  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.915036  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003384  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.890658634s)
	I0122 21:28:02.003466  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003495  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.003851  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.003914  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.003943  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003952  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003960  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.004229  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.004247  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.004261  314650 addons.go:479] Verifying addon metrics-server=true in "newest-cni-489789"
	I0122 21:28:02.891241  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.857486932s)
	I0122 21:28:02.891533  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.891588  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894087  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.894100  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894130  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.894140  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.894149  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894509  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894564  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.896533  314650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-489789 addons enable metrics-server
	
	I0122 21:28:02.898219  314650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:28:02.900518  314650 addons.go:514] duration metric: took 3.499959979s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:28:02.900586  314650 start.go:246] waiting for cluster config update ...
	I0122 21:28:02.900604  314650 start.go:255] writing updated cluster config ...
	I0122 21:28:02.900904  314650 ssh_runner.go:195] Run: rm -f paused
	I0122 21:28:02.965147  314650 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:28:02.967085  314650 out.go:177] * Done! kubectl is now configured to use "newest-cni-489789" cluster and "default" namespace by default
	I0122 21:29:27.087272  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:29:27.087393  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:29:27.089567  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.089666  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.089781  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.089958  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.090084  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:27.090165  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:27.092167  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:27.092283  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:27.092358  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:27.092462  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:27.092535  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:27.092611  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:27.092682  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:27.092771  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:27.092848  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:27.092976  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:27.093094  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:27.093166  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:27.093261  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:27.093350  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:27.093398  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:27.093476  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:27.093559  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:27.093650  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:27.093720  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:27.093761  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:27.093818  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:27.095338  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:27.095468  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:27.095555  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:27.095632  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:27.095710  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:27.095838  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:29:27.095878  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:29:27.095937  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096106  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096195  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096453  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096565  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096796  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096867  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097090  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097177  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097367  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097386  312675 kubeadm.go:310] 
	I0122 21:29:27.097443  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:29:27.097512  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:29:27.097527  312675 kubeadm.go:310] 
	I0122 21:29:27.097557  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:29:27.097611  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:29:27.097761  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:29:27.097783  312675 kubeadm.go:310] 
	I0122 21:29:27.097878  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:29:27.097928  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:29:27.097955  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:29:27.097962  312675 kubeadm.go:310] 
	I0122 21:29:27.098055  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:29:27.098120  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:29:27.098127  312675 kubeadm.go:310] 
	I0122 21:29:27.098272  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:29:27.098357  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:29:27.098434  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:29:27.098533  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:29:27.098585  312675 kubeadm.go:310] 
	W0122 21:29:27.098687  312675 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
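The repeated [kubelet-check] probes above show the kubelet never answered on 127.0.0.1:10248, so kubeadm timed out waiting for the control plane. A minimal sketch of reproducing those checks by hand on the node (an assumption for illustration only; the profile name old-k8s-version-181389 is taken from the CRI-O section further down, and every command is one already quoted in the error text above):

	# the same health probe kubeadm's [kubelet-check] performs
	minikube -p old-k8s-version-181389 ssh -- "curl -sSL http://localhost:10248/healthz"
	# kubelet service state and recent logs, as the kubeadm hint suggests
	minikube -p old-k8s-version-181389 ssh -- "sudo systemctl status kubelet"
	minikube -p old-k8s-version-181389 ssh -- "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers CRI-O managed to start
	minikube -p old-k8s-version-181389 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"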
	
	I0122 21:29:27.098731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:29:27.599261  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:29:27.617267  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:29:27.629164  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:29:27.629190  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:29:27.629255  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:29:27.641001  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:29:27.641072  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:29:27.653446  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:29:27.666334  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:29:27.666426  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:29:27.678551  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.689687  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:29:27.689757  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.702030  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:29:27.713507  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:29:27.713585  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:29:27.726067  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:29:27.816417  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.816555  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.995432  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.995599  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.995745  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:28.218104  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:28.220056  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:28.220190  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:28.220278  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:28.220383  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:28.220486  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:28.220573  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:28.220648  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:28.220880  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:28.221175  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:28.222058  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:28.222351  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:28.222442  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:28.222530  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:28.304455  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:28.572192  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:28.869356  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:29.053609  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:29.082264  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:29.082429  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:29.082503  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:29.253931  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:29.256894  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:29.257044  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:29.267513  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:29.269154  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:29.270276  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:29.274228  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:30:09.277116  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:30:09.277238  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:09.277504  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:14.278173  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:14.278454  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:24.278945  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:24.279149  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:44.279492  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:44.279715  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278351  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:31:24.278612  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278628  312675 kubeadm.go:310] 
	I0122 21:31:24.278664  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:31:24.278723  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:31:24.278735  312675 kubeadm.go:310] 
	I0122 21:31:24.278775  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:31:24.278827  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:31:24.278956  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:31:24.278981  312675 kubeadm.go:310] 
	I0122 21:31:24.279066  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:31:24.279109  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:31:24.279140  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:31:24.279147  312675 kubeadm.go:310] 
	I0122 21:31:24.279253  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:31:24.279353  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:31:24.279373  312675 kubeadm.go:310] 
	I0122 21:31:24.279516  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:31:24.279639  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:31:24.279754  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:31:24.279837  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:31:24.279895  312675 kubeadm.go:310] 
	I0122 21:31:24.280842  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:31:24.280984  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:31:24.281074  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:31:24.281148  312675 kubeadm.go:394] duration metric: took 7m59.138107768s to StartCluster
	I0122 21:31:24.281220  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:31:24.281302  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:31:24.331184  312675 cri.go:89] found id: ""
	I0122 21:31:24.331225  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.331235  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:31:24.331242  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:31:24.331309  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:31:24.372934  312675 cri.go:89] found id: ""
	I0122 21:31:24.372963  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.372972  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:31:24.372979  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:31:24.373034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:31:24.413239  312675 cri.go:89] found id: ""
	I0122 21:31:24.413274  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.413284  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:31:24.413290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:31:24.413347  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:31:24.452513  312675 cri.go:89] found id: ""
	I0122 21:31:24.452552  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.452564  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:31:24.452573  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:31:24.452644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:31:24.491580  312675 cri.go:89] found id: ""
	I0122 21:31:24.491617  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.491629  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:31:24.491637  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:31:24.491710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:31:24.544823  312675 cri.go:89] found id: ""
	I0122 21:31:24.544856  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.544865  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:31:24.544872  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:31:24.544930  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:31:24.585047  312675 cri.go:89] found id: ""
	I0122 21:31:24.585085  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.585099  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:31:24.585108  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:31:24.585175  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:31:24.624152  312675 cri.go:89] found id: ""
	I0122 21:31:24.624189  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.624201  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:31:24.624216  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:31:24.624231  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:31:24.717945  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:31:24.717971  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:31:24.717989  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:31:24.826216  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:31:24.826260  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:31:24.878403  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:31:24.878439  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:31:24.931058  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:31:24.931102  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0122 21:31:24.947080  312675 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:31:24.947171  312675 out.go:270] * 
	W0122 21:31:24.947310  312675 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.947331  312675 out.go:270] * 
	W0122 21:31:24.948119  312675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:31:24.951080  312675 out.go:201] 
	W0122 21:31:24.952375  312675 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.952433  312675 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:31:24.952459  312675 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:31:24.954056  312675 out.go:201] 
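A minimal sketch of the retry the Suggestion line above points at; only --extra-config=kubelet.cgroup-driver=systemd comes from that hint, while the driver, runtime and version flags are assumptions based on this report's KVM/crio/v1.20.0 configuration:

	minikube start -p old-k8s-version-181389 \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd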
	
	
	==> CRI-O <==
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.059591555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737581486059566715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4affc29-e6d7-4a67-b1f7-84984873b11a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.060200979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b649565e-1f60-4246-b3b3-d16f1226e1c1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.060277442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b649565e-1f60-4246-b3b3-d16f1226e1c1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.060321810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b649565e-1f60-4246-b3b3-d16f1226e1c1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.099246684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41aae948-adff-4ba1-9378-59ff1a32ec84 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.099346961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41aae948-adff-4ba1-9378-59ff1a32ec84 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.100749841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a90ab47b-043a-466b-ad90-e2cb121e6b39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.101272003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737581486101244442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a90ab47b-043a-466b-ad90-e2cb121e6b39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.102256354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5de93382-0399-412b-bdc5-258e3925028d name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.102310643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5de93382-0399-412b-bdc5-258e3925028d name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.102343599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5de93382-0399-412b-bdc5-258e3925028d name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.138623818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b44afc46-8ca0-4e57-9092-6e9052fe1027 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.138733394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b44afc46-8ca0-4e57-9092-6e9052fe1027 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.141043601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98b716a5-5fac-46f3-8ae7-bb79d7cf6253 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.141477952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737581486141443939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98b716a5-5fac-46f3-8ae7-bb79d7cf6253 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.142334643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccfcbe81-1c5a-42d7-89bb-4ff9794a9748 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.142439057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccfcbe81-1c5a-42d7-89bb-4ff9794a9748 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.142494425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ccfcbe81-1c5a-42d7-89bb-4ff9794a9748 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.186564648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64849aab-306f-4a5a-b6d9-5cf7ff69dc15 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.186682314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64849aab-306f-4a5a-b6d9-5cf7ff69dc15 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.187845518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1dab34dd-8ccb-4748-bae0-6dbd74b94980 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.188286215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737581486188261007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dab34dd-8ccb-4748-bae0-6dbd74b94980 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.188806719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=784583aa-83fa-42e5-9797-7f897296de76 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.188880872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=784583aa-83fa-42e5-9797-7f897296de76 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:31:26 old-k8s-version-181389 crio[623]: time="2025-01-22 21:31:26.188962722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=784583aa-83fa-42e5-9797-7f897296de76 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan22 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044754] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan22 21:23] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.204474] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.706641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615943] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.071910] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069973] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.211991] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.154871] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.286617] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +7.265364] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.070492] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.978592] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[ +12.734713] kauditd_printk_skb: 46 callbacks suppressed
	[Jan22 21:27] systemd-fstab-generator[4928]: Ignoring "noauto" option for root device
	[Jan22 21:29] systemd-fstab-generator[5204]: Ignoring "noauto" option for root device
	[  +0.083243] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:31:26 up 8 min,  0 users,  load average: 0.10, 0.11, 0.07
	Linux old-k8s-version-181389 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00023e8c0, 0xc000a79e60, 0x1, 0x0, 0x0)
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008a7340)
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: goroutine 162 [runnable]:
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: runtime.Gosched(...)
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /usr/local/go/src/runtime/proc.go:271
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c40780, 0x0, 0x0)
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008a7340)
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 22 21:31:24 old-k8s-version-181389 kubelet[5382]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 22 21:31:24 old-k8s-version-181389 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 22 21:31:24 old-k8s-version-181389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 22 21:31:25 old-k8s-version-181389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 22 21:31:25 old-k8s-version-181389 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 22 21:31:25 old-k8s-version-181389 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 22 21:31:25 old-k8s-version-181389 kubelet[5447]: I0122 21:31:25.282227    5447 server.go:416] Version: v1.20.0
	Jan 22 21:31:25 old-k8s-version-181389 kubelet[5447]: I0122 21:31:25.283226    5447 server.go:837] Client rotation is on, will bootstrap in background
	Jan 22 21:31:25 old-k8s-version-181389 kubelet[5447]: I0122 21:31:25.286525    5447 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 22 21:31:25 old-k8s-version-181389 kubelet[5447]: I0122 21:31:25.289271    5447 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 22 21:31:25 old-k8s-version-181389 kubelet[5447]: W0122 21:31:25.289398    5447 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (263.308717ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-181389" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (511.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:31:51.117069  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:32:04.884790  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:32:26.087010  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:32:47.117251  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:33:53.344829  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:34:04.376725  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:34:10.183720  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:34:34.482677  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:34:47.846651  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:35:11.097619  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:35:50.258151  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:35:57.547497  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:36:10.909523  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 22 more times]
E0122 21:36:34.162706  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 16 more times]
E0122 21:36:51.116813  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 13 more times]
E0122 21:37:04.884777  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 20 more times]
E0122 21:37:26.086340  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 20 more times]
E0122 21:37:47.118013  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 26 more times]
E0122 21:38:14.183456  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 13 more times]
E0122 21:38:27.950888  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 20 more times]
E0122 21:38:49.152920  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
    [previous warning repeated 14 more times]
E0122 21:39:04.377098  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:39:34.482355  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:39:47.846087  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:40:11.098007  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (267.991439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-181389" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
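
The repeated warnings above come from a poll that lists pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label and retries until a matching pod is Running or the 9m0s deadline expires; every attempt failed because the apiserver at 192.168.72.222:8443 refused connections. The following is a minimal client-go sketch of that kind of check, not the project's actual helper code; the kubeconfig path is an assumption for illustration.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the CI run uses its own profile directories.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same namespace and label selector as in the warnings above.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// A "connection refused" here corresponds to the apiserver on
		// 192.168.72.222:8443 not accepting connections, as seen in the log.
		log.Fatalf("pod list failed: %v", err)
	}
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%s phase=%s running=%v\n", p.Name, p.Status.Phase, running)
	}
}

A loop around this check with a deadline (for example, context.WithTimeout for 9 minutes) would reproduce the behaviour the test reports: if no pod reaches Running before the deadline, the wait ends with "context deadline exceeded".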
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (256.246333ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
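
The post-mortem step shells out to the minikube binary and inspects both the printed status and the process exit code; a non-zero exit (here 2) can still be acceptable, as the "(may be ok)" note indicates. Below is a minimal sketch of capturing that output and exit code from Go, not the test helper itself; the binary path and profile name are copied from the log and are illustrative only.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-181389", "-n", "old-k8s-version-181389")
	out, err := cmd.Output() // stdout is still captured even on a non-zero exit
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit code does not necessarily mean the command failed to
		// run; it can simply reflect the reported component state.
		code = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Printf("status=%q exit=%d\n", string(out), code)
}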
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-181389 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-806477                  | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC | 22 Jan 25 21:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-806477                                   | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-635179                 | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-181389        | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991469       | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | default-k8s-diff-port-991469                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-181389             | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-635179 image list                          | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-489789             | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-489789                  | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-489789 image list                           | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:27:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:27:23.911116  314650 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:27:23.911744  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.911765  314650 out.go:358] Setting ErrFile to fd 2...
	I0122 21:27:23.911774  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.912250  314650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:27:23.913222  314650 out.go:352] Setting JSON to false
	I0122 21:27:23.914762  314650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14990,"bootTime":1737566254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:27:23.914894  314650 start.go:139] virtualization: kvm guest
	I0122 21:27:23.916750  314650 out.go:177] * [newest-cni-489789] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:27:23.918320  314650 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:27:23.918320  314650 notify.go:220] Checking for updates...
	I0122 21:27:23.920824  314650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:27:23.922296  314650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:23.923574  314650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:27:23.924769  314650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:27:23.926102  314650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:27:23.927578  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:23.928058  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.928125  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.944579  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0122 21:27:23.945073  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.945640  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.945664  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.946073  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.946377  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:23.946689  314650 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:27:23.947048  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.947102  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.963420  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0122 21:27:23.963873  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.964454  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.964502  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.964926  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.965154  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.005605  314650 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:27:24.007129  314650 start.go:297] selected driver: kvm2
	I0122 21:27:24.007153  314650 start.go:901] validating driver "kvm2" against &{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.007318  314650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:27:24.008093  314650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.008222  314650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:27:24.024940  314650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:27:24.025456  314650 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:24.025502  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:24.025549  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:24.025588  314650 start.go:340] cluster config:
	{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.025695  314650 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.027752  314650 out.go:177] * Starting "newest-cni-489789" primary control-plane node in "newest-cni-489789" cluster
	I0122 21:27:24.029033  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:24.029101  314650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:27:24.029119  314650 cache.go:56] Caching tarball of preloaded images
	I0122 21:27:24.029287  314650 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:27:24.029306  314650 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:27:24.029475  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:24.029808  314650 start.go:360] acquireMachinesLock for newest-cni-489789: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:27:24.029874  314650 start.go:364] duration metric: took 34.85µs to acquireMachinesLock for "newest-cni-489789"
	I0122 21:27:24.029897  314650 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:27:24.029908  314650 fix.go:54] fixHost starting: 
	I0122 21:27:24.030383  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:24.030486  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:24.046512  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0122 21:27:24.047013  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:24.047605  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:24.047640  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:24.048047  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:24.048290  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.048464  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:24.050271  314650 fix.go:112] recreateIfNeeded on newest-cni-489789: state=Stopped err=<nil>
	I0122 21:27:24.050304  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	W0122 21:27:24.050473  314650 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:27:24.052496  314650 out.go:177] * Restarting existing kvm2 VM for "newest-cni-489789" ...
	I0122 21:27:21.730303  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:21.747123  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:21.747212  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:21.793769  312675 cri.go:89] found id: ""
	I0122 21:27:21.793807  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.793827  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:21.793835  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:21.793912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:21.840045  312675 cri.go:89] found id: ""
	I0122 21:27:21.840088  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.840101  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:21.840109  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:21.840187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:21.885265  312675 cri.go:89] found id: ""
	I0122 21:27:21.885302  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.885314  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:21.885323  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:21.885404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:21.937734  312675 cri.go:89] found id: ""
	I0122 21:27:21.937768  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.937777  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:21.937783  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:21.937844  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:21.989238  312675 cri.go:89] found id: ""
	I0122 21:27:21.989276  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.989294  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:21.989300  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:21.989377  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:22.035837  312675 cri.go:89] found id: ""
	I0122 21:27:22.035921  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.035934  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:22.035944  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:22.036016  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:22.091690  312675 cri.go:89] found id: ""
	I0122 21:27:22.091731  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.091745  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:22.091754  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:22.091828  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:22.149775  312675 cri.go:89] found id: ""
	I0122 21:27:22.149888  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.149913  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:22.149958  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:22.150005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:22.213610  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:22.213665  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:22.233970  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:22.234014  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:22.318579  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:22.318606  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:22.318622  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:22.422850  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:22.422899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:24.974063  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:24.990751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:24.990850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:25.036044  312675 cri.go:89] found id: ""
	I0122 21:27:25.036082  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.036094  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:25.036103  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:25.036173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:25.078700  312675 cri.go:89] found id: ""
	I0122 21:27:25.078736  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.078748  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:25.078759  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:25.078829  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:25.134919  312675 cri.go:89] found id: ""
	I0122 21:27:25.134971  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.134984  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:25.134994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:25.135075  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:25.183649  312675 cri.go:89] found id: ""
	I0122 21:27:25.183684  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.183695  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:25.183704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:25.183778  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:25.240357  312675 cri.go:89] found id: ""
	I0122 21:27:25.240401  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.240414  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:25.240425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:25.240555  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:25.284093  312675 cri.go:89] found id: ""
	I0122 21:27:25.284132  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.284141  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:25.284149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:25.284218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:25.328590  312675 cri.go:89] found id: ""
	I0122 21:27:25.328621  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.328632  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:25.328641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:25.328710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:25.378479  312675 cri.go:89] found id: ""
	I0122 21:27:25.378517  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.378529  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:25.378543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:25.378559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:25.433767  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:25.433800  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:24.053834  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Start
	I0122 21:27:24.054152  314650 main.go:141] libmachine: (newest-cni-489789) starting domain...
	I0122 21:27:24.054175  314650 main.go:141] libmachine: (newest-cni-489789) ensuring networks are active...
	I0122 21:27:24.055132  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network default is active
	I0122 21:27:24.055534  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network mk-newest-cni-489789 is active
	I0122 21:27:24.055963  314650 main.go:141] libmachine: (newest-cni-489789) getting domain XML...
	I0122 21:27:24.056886  314650 main.go:141] libmachine: (newest-cni-489789) creating domain...
	I0122 21:27:25.457503  314650 main.go:141] libmachine: (newest-cni-489789) waiting for IP...
	I0122 21:27:25.458754  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.459431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.459544  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.459394  314684 retry.go:31] will retry after 258.579884ms: waiting for domain to come up
	I0122 21:27:25.720098  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.720657  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.720704  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.720649  314684 retry.go:31] will retry after 347.192205ms: waiting for domain to come up
	I0122 21:27:26.069095  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.069843  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.069880  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.069813  314684 retry.go:31] will retry after 318.422908ms: waiting for domain to come up
	I0122 21:27:26.390692  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.391374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.391431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.391350  314684 retry.go:31] will retry after 516.847382ms: waiting for domain to come up
	I0122 21:27:26.910252  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.910831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.910862  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.910801  314684 retry.go:31] will retry after 657.195872ms: waiting for domain to come up
	I0122 21:27:27.569972  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:27.570617  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:27.570651  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:27.570590  314684 retry.go:31] will retry after 601.660948ms: waiting for domain to come up
	I0122 21:27:28.173427  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:28.174022  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:28.174065  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:28.173988  314684 retry.go:31] will retry after 839.292486ms: waiting for domain to come up
	I0122 21:27:25.497717  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:25.497767  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:25.530904  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:25.530961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:25.631676  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:25.631701  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:25.631717  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:28.221852  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:28.236702  312675 kubeadm.go:597] duration metric: took 4m3.036103838s to restartPrimaryControlPlane
	W0122 21:27:28.236803  312675 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:27:28.236837  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:27:29.014929  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:29.015535  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:29.015569  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:29.015501  314684 retry.go:31] will retry after 1.28366543s: waiting for domain to come up
	I0122 21:27:30.300346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:30.300806  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:30.300834  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:30.300775  314684 retry.go:31] will retry after 1.437378164s: waiting for domain to come up
	I0122 21:27:31.739437  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:31.740073  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:31.740106  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:31.740043  314684 retry.go:31] will retry after 1.547235719s: waiting for domain to come up
	I0122 21:27:33.289857  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:33.290395  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:33.290452  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:33.290357  314684 retry.go:31] will retry after 2.864838858s: waiting for domain to come up
	I0122 21:27:30.647940  312675 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.411072952s)
	I0122 21:27:30.648042  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:27:30.669610  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:30.684678  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:30.698168  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:30.698232  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:30.698285  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:30.708774  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:30.708855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:30.720213  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:30.731121  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:30.731207  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:30.743153  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.754160  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:30.754262  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.765730  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:30.776902  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:30.776990  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:30.788361  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:27:31.040925  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:27:36.157916  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:36.158675  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:36.158706  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:36.158608  314684 retry.go:31] will retry after 3.253566336s: waiting for domain to come up
	I0122 21:27:39.413761  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:39.414347  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:39.414380  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:39.414310  314684 retry.go:31] will retry after 3.952766125s: waiting for domain to come up
	I0122 21:27:43.371406  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371943  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has current primary IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371999  314650 main.go:141] libmachine: (newest-cni-489789) found domain IP: 192.168.50.146
	I0122 21:27:43.372024  314650 main.go:141] libmachine: (newest-cni-489789) reserving static IP address...
	I0122 21:27:43.372454  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.372482  314650 main.go:141] libmachine: (newest-cni-489789) DBG | skip adding static IP to network mk-newest-cni-489789 - found existing host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"}
	I0122 21:27:43.372502  314650 main.go:141] libmachine: (newest-cni-489789) reserved static IP address 192.168.50.146 for domain newest-cni-489789
	I0122 21:27:43.372516  314650 main.go:141] libmachine: (newest-cni-489789) waiting for SSH...
	I0122 21:27:43.372527  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Getting to WaitForSSH function...
	I0122 21:27:43.374698  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.374984  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.375016  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.375148  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH client type: external
	I0122 21:27:43.375173  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa (-rw-------)
	I0122 21:27:43.375212  314650 main.go:141] libmachine: (newest-cni-489789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:27:43.375232  314650 main.go:141] libmachine: (newest-cni-489789) DBG | About to run SSH command:
	I0122 21:27:43.375243  314650 main.go:141] libmachine: (newest-cni-489789) DBG | exit 0
	I0122 21:27:43.503039  314650 main.go:141] libmachine: (newest-cni-489789) DBG | SSH cmd err, output: <nil>: 
	I0122 21:27:43.503449  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetConfigRaw
	I0122 21:27:43.504138  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.507198  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507562  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.507607  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507876  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:43.508166  314650 machine.go:93] provisionDockerMachine start ...
	I0122 21:27:43.508196  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:43.508518  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.511111  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511408  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.511442  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511632  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.511842  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512002  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512147  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.512352  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.512624  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.512643  314650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:27:43.619425  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:27:43.619472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619742  314650 buildroot.go:166] provisioning hostname "newest-cni-489789"
	I0122 21:27:43.619772  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619998  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.622781  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623242  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.623285  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623505  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.623728  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.623892  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.624013  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.624154  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.624410  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.624432  314650 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-489789 && echo "newest-cni-489789" | sudo tee /etc/hostname
	I0122 21:27:43.747575  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-489789
	
	I0122 21:27:43.747605  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.750745  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751080  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.751127  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751553  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.751775  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.751918  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.752035  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.752185  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.752425  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.752465  314650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-489789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-489789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-489789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:27:43.865258  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:27:43.865290  314650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:27:43.865312  314650 buildroot.go:174] setting up certificates
	I0122 21:27:43.865327  314650 provision.go:84] configureAuth start
	I0122 21:27:43.865362  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.865704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.868648  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.868993  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.869025  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.869222  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.871572  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.871860  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.871894  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.872044  314650 provision.go:143] copyHostCerts
	I0122 21:27:43.872109  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:27:43.872130  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:27:43.872205  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:27:43.872312  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:27:43.872321  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:27:43.872346  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:27:43.872433  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:27:43.872447  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:27:43.872471  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:27:43.872536  314650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-489789 san=[127.0.0.1 192.168.50.146 localhost minikube newest-cni-489789]
	I0122 21:27:44.234481  314650 provision.go:177] copyRemoteCerts
	I0122 21:27:44.234579  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:27:44.234618  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.237848  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238297  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.238332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238604  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.238788  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.238988  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.239154  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.326083  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:27:44.355837  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:27:44.387644  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:27:44.418003  314650 provision.go:87] duration metric: took 552.65522ms to configureAuth
	I0122 21:27:44.418039  314650 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:27:44.418347  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:44.418475  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.421349  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.421796  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.421839  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.422067  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.422301  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422470  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422603  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.422810  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.423129  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.423156  314650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:27:44.671197  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:27:44.671232  314650 machine.go:96] duration metric: took 1.163046458s to provisionDockerMachine
	I0122 21:27:44.671247  314650 start.go:293] postStartSetup for "newest-cni-489789" (driver="kvm2")
	I0122 21:27:44.671261  314650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:27:44.671289  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.671667  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:27:44.671704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.674811  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675137  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.675164  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675350  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.675624  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.675817  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.675987  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.759194  314650 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:27:44.764553  314650 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:27:44.764591  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:27:44.764668  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:27:44.764741  314650 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:27:44.764835  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:27:44.778239  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:44.807409  314650 start.go:296] duration metric: took 136.131239ms for postStartSetup
	I0122 21:27:44.807474  314650 fix.go:56] duration metric: took 20.777566838s for fixHost
	I0122 21:27:44.807580  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.810883  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811279  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.811312  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.811736  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.811908  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.812086  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.812268  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.812448  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.812459  314650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:27:44.915903  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737581264.870208902
	
	I0122 21:27:44.915934  314650 fix.go:216] guest clock: 1737581264.870208902
	I0122 21:27:44.915945  314650 fix.go:229] Guest: 2025-01-22 21:27:44.870208902 +0000 UTC Remote: 2025-01-22 21:27:44.807479632 +0000 UTC m=+20.941890306 (delta=62.72927ms)
	I0122 21:27:44.915983  314650 fix.go:200] guest clock delta is within tolerance: 62.72927ms
	I0122 21:27:44.915991  314650 start.go:83] releasing machines lock for "newest-cni-489789", held for 20.886101347s
	I0122 21:27:44.916019  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.916292  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:44.919374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.919795  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.919831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.920026  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920725  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920966  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.921087  314650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:27:44.921144  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.921271  314650 ssh_runner.go:195] Run: cat /version.json
	I0122 21:27:44.921303  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.924275  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924511  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924546  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924566  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924759  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.924827  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924871  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924995  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925090  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.925199  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925283  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925319  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.925420  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925532  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:45.025072  314650 ssh_runner.go:195] Run: systemctl --version
	I0122 21:27:45.032652  314650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:27:45.187726  314650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:27:45.194767  314650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:27:45.194851  314650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:27:45.213610  314650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
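	The two steps above first probe for a loopback CNI config (none found) and then rename any bridge/podman CNI configs so they cannot conflict with the config minikube writes later; here that disabled /etc/cni/net.d/87-podman-bridge.conflist. A sketch of the same disable step, plus a hypothetical inverse to restore the files by hand:
	
	    # disable conflicting CNI configs by renaming them (mirrors the find/mv above)
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	    # hypothetical restore, not part of this run: strip the .mk_disabled suffix again
	    for f in /etc/cni/net.d/*.mk_disabled; do [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"; done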
	I0122 21:27:45.213644  314650 start.go:495] detecting cgroup driver to use...
	I0122 21:27:45.213723  314650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:27:45.231803  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:27:45.247682  314650 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:27:45.247801  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:27:45.263581  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:27:45.279536  314650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:27:45.406663  314650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:27:45.562297  314650 docker.go:233] disabling docker service ...
	I0122 21:27:45.562383  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:27:45.579904  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:27:45.595144  314650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:27:45.739957  314650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:27:45.866024  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:27:45.882728  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:27:45.907297  314650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:27:45.907388  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.920271  314650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:27:45.920341  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.933095  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.945711  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.958348  314650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:27:45.972409  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.989090  314650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.011819  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
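	Read together, the sed edits above point cri-o at the pause:3.10 image, switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and re-add a default_sysctls list that opens unprivileged ports from 0. A sketch of the keys /etc/crio/crio.conf.d/02-crio.conf should end up containing after this pass (section headers omitted; not a dump of the actual file):
	
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]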
	I0122 21:27:46.025229  314650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:27:46.038393  314650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:27:46.038475  314650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:27:46.055252  314650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
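	The status-255 sysctl read above only means the br_netfilter module was not loaded yet, so /proc/sys/net/bridge/ did not exist; the next two commands load the module and enable IPv4 forwarding. Sketched for manual reproduction on the guest:
	
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables          # resolves once the module is loaded
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"     # pod traffic needs IPv4 forwarding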
	I0122 21:27:46.068173  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:46.196285  314650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:27:46.295821  314650 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:27:46.295921  314650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:27:46.301506  314650 start.go:563] Will wait 60s for crictl version
	I0122 21:27:46.301587  314650 ssh_runner.go:195] Run: which crictl
	I0122 21:27:46.306074  314650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:27:46.352624  314650 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:27:46.352727  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.385398  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.422040  314650 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:27:46.423591  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:46.426902  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427305  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:46.427332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427679  314650 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0122 21:27:46.432609  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:46.448941  314650 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0122 21:27:46.450413  314650 kubeadm.go:883] updating cluster {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:27:46.450575  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:46.450683  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:46.496073  314650 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:27:46.496165  314650 ssh_runner.go:195] Run: which lz4
	I0122 21:27:46.500895  314650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:27:46.505854  314650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:27:46.505909  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:27:48.159588  314650 crio.go:462] duration metric: took 1.658732075s to copy over tarball
	I0122 21:27:48.159687  314650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:27:50.643587  314650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483861806s)
	I0122 21:27:50.643623  314650 crio.go:469] duration metric: took 2.483996867s to extract the tarball
	I0122 21:27:50.643632  314650 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:27:50.683708  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:50.732147  314650 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:27:50.732183  314650 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:27:50.732194  314650 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.32.1 crio true true} ...
	I0122 21:27:50.732350  314650 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-489789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
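	The kubelet snippet above becomes a systemd drop-in; the empty `ExecStart=` line clears any ExecStart inherited from the base unit before the minikube-specific command line is set. A sketch of the drop-in as it is written a few lines below (317 bytes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):
	
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch of the generated drop-in)
	    [Unit]
	    Wants=crio.service
	
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-489789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.146
	
	    [Install]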
	I0122 21:27:50.732425  314650 ssh_runner.go:195] Run: crio config
	I0122 21:27:50.789877  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:50.789904  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:50.789920  314650 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0122 21:27:50.789953  314650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-489789 NodeName:newest-cni-489789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:27:50.790132  314650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-489789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.146"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:27:50.790261  314650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:27:50.801652  314650 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:27:50.801742  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:27:50.813168  314650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0122 21:27:50.832707  314650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:27:50.852375  314650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
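	At this point the rendered kubeadm config (2295 bytes, matching the YAML dumped above) is staged as /var/tmp/minikube/kubeadm.yaml.new. If a later init phase misbehaves, the staged file can be inspected or pre-checked on the guest; a sketch, assuming the bundled kubeadm is new enough to ship `kubeadm config validate`:
	
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
	      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new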
	I0122 21:27:50.875185  314650 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I0122 21:27:50.879818  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:50.893992  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:51.040056  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:51.060681  314650 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789 for IP: 192.168.50.146
	I0122 21:27:51.060711  314650 certs.go:194] generating shared ca certs ...
	I0122 21:27:51.060737  314650 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.060940  314650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:27:51.061018  314650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:27:51.061036  314650 certs.go:256] generating profile certs ...
	I0122 21:27:51.061157  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/client.key
	I0122 21:27:51.061251  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key.de28c3d3
	I0122 21:27:51.061317  314650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key
	I0122 21:27:51.061482  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:27:51.061526  314650 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:27:51.061539  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:27:51.061572  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:27:51.061603  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:27:51.061636  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:27:51.061692  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:51.062633  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:27:51.098858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:27:51.145243  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:27:51.180019  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:27:51.208916  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:27:51.237139  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:27:51.270858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:27:51.306734  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:27:51.341424  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:27:51.370701  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:27:51.402552  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:27:51.431817  314650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:27:51.452816  314650 ssh_runner.go:195] Run: openssl version
	I0122 21:27:51.460223  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:27:51.474716  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480785  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480874  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.489093  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:27:51.501870  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:27:51.514659  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520559  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520713  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.527928  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:27:51.541856  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:27:51.555463  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561295  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561368  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.568531  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
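	The openssl/ln pairs above install each CA into the guest trust store under its subject-hash name, which is how OpenSSL looks certificates up in /etc/ssl/certs. The pattern for one certificate, sketched (the hash differs per cert, e.g. b5213941 for minikubeCA above):
	
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")      # subject-hash used as the symlink name
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"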
	I0122 21:27:51.584716  314650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:27:51.590762  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:27:51.598592  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:27:51.605666  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:27:51.613414  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:27:51.621894  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:27:51.629916  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
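	Each `-checkend 86400` call exits non-zero if the certificate would expire within 24 hours, which is the signal used to decide whether certs need regenerating. The same check, sketched as a loop over the control-plane certs probed above:
	
	    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
	             etcd/server etcd/healthcheck-client etcd/peer; do
	      openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	        >/dev/null && echo "ok       ${c}" || echo "expiring ${c}"
	    done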
	I0122 21:27:51.636995  314650 kubeadm.go:392] StartCluster: {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mult
iNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:51.637138  314650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:27:51.637358  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.691610  314650 cri.go:89] found id: ""
	I0122 21:27:51.691683  314650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:27:51.703943  314650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:27:51.703976  314650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:27:51.704044  314650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:27:51.715920  314650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:27:51.716767  314650 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-489789" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:51.717203  314650 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-489789" cluster setting kubeconfig missing "newest-cni-489789" context setting]
	I0122 21:27:51.717901  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.729230  314650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:27:51.741794  314650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I0122 21:27:51.741842  314650 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:27:51.741859  314650 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:27:51.741927  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.789068  314650 cri.go:89] found id: ""
	I0122 21:27:51.789171  314650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:27:51.809451  314650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:51.821492  314650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:51.821515  314650 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:51.821564  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:51.833428  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:51.833507  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:51.845423  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:51.856151  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:51.856247  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:51.868260  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.879595  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:51.879671  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.892482  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:51.905485  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:51.905558  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:51.917498  314650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:51.930487  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:52.072199  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.069420  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.321398  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.393577  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
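	Rather than running a full `kubeadm init`, the restart path replays only the phases it needs (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. The same sequence, sketched as a loop (word splitting of $phase is intentional so "certs all" becomes two arguments):
	
	    cfg=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
	        kubeadm init phase $phase --config "$cfg"
	    done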
	I0122 21:27:53.471920  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:53.472027  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:53.972577  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.472481  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.972531  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.989674  314650 api_server.go:72] duration metric: took 1.517756303s to wait for apiserver process to appear ...
	I0122 21:27:54.989707  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:54.989729  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.208473  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.208515  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.208536  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.292726  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.292780  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.490170  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.499620  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.499655  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:57.990312  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.998214  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.998257  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.489875  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.496876  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:58.496913  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.990600  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.995909  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.004894  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.004943  314650 api_server.go:131] duration metric: took 4.015227175s to wait for apiserver health ...
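	The 403s earlier in this wait come from probing /healthz anonymously before the RBAC bootstrap roles exist, and the 500s list the individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) still pending; once those clear, the endpoint returns 200 and the reported control-plane version is v1.32.1. The same breakdown can be fetched by hand, sketched with TLS verification skipped against the minikube CA:
	
	    curl -ks "https://192.168.50.146:8443/healthz?verbose"   # per-check [+]/[-] output as shown above
	    curl -ks "https://192.168.50.146:8443/version"           # control-plane version once it is up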
	I0122 21:27:59.004977  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:59.004987  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:59.006689  314650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:27:59.008029  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:27:59.020070  314650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
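	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain for the 10.42.0.0/16 pod CIDR configured earlier. For orientation only, a generic bridge+portmap conflist of the same shape (illustrative; not the exact file minikube generated):
	
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.42.0.0/16",
	            "routes": [ { "dst": "0.0.0.0/0" } ]
	          }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }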
	I0122 21:27:59.044659  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.055648  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.055702  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.055713  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.055724  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.055732  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.055738  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.055754  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.055766  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.055774  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.055788  314650 system_pods.go:74] duration metric: took 11.091605ms to wait for pod list to return data ...
	I0122 21:27:59.055802  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.060105  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.060148  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.060164  314650 node_conditions.go:105] duration metric: took 4.355866ms to run NodePressure ...
	I0122 21:27:59.060188  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:59.384018  314650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:27:59.398090  314650 ops.go:34] apiserver oom_adj: -16
	I0122 21:27:59.398128  314650 kubeadm.go:597] duration metric: took 7.694142189s to restartPrimaryControlPlane
	I0122 21:27:59.398142  314650 kubeadm.go:394] duration metric: took 7.761160632s to StartCluster
	I0122 21:27:59.398170  314650 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.398290  314650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:59.400046  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.400419  314650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:27:59.400556  314650 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:27:59.400665  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:59.400686  314650 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-489789"
	I0122 21:27:59.400707  314650 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-489789"
	W0122 21:27:59.400716  314650 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:27:59.400726  314650 addons.go:69] Setting default-storageclass=true in profile "newest-cni-489789"
	I0122 21:27:59.400741  314650 addons.go:69] Setting dashboard=true in profile "newest-cni-489789"
	I0122 21:27:59.400761  314650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-489789"
	I0122 21:27:59.400768  314650 addons.go:238] Setting addon dashboard=true in "newest-cni-489789"
	W0122 21:27:59.400778  314650 addons.go:247] addon dashboard should already be in state true
	I0122 21:27:59.400815  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.400765  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401235  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401237  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401262  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401321  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.400718  314650 addons.go:69] Setting metrics-server=true in profile "newest-cni-489789"
	I0122 21:27:59.401464  314650 addons.go:238] Setting addon metrics-server=true in "newest-cni-489789"
	W0122 21:27:59.401475  314650 addons.go:247] addon metrics-server should already be in state true
	I0122 21:27:59.401509  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401887  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401975  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.402025  314650 out.go:177] * Verifying Kubernetes components...
	I0122 21:27:59.403359  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0122 21:27:59.421021  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0122 21:27:59.421349  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421459  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421547  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.422098  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422122  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422144  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422325  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422349  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422401  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0122 21:27:59.423146  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423151  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423148  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423359  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.423430  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.423817  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.423841  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.423816  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.423882  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.424405  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.425054  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425105  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.425288  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425335  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.427261  314650 addons.go:238] Setting addon default-storageclass=true in "newest-cni-489789"
	W0122 21:27:59.427282  314650 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:27:59.427316  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.427674  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.427723  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.446713  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0122 21:27:59.446783  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0122 21:27:59.451272  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451373  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451946  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.451969  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452101  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.452121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452538  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.452791  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.452801  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.453414  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.455400  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.455881  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.457716  314650 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:27:59.457751  314650 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:27:59.459475  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:27:59.459504  314650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:27:59.459539  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.460864  314650 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:27:59.462275  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:27:59.462311  314650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:27:59.462354  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.466673  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467509  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.467541  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467851  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.468096  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.468288  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.468589  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.468600  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.469258  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.469308  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.469497  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.469679  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.469875  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.470056  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.473781  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0122 21:27:59.473966  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0122 21:27:59.474357  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474615  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474910  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.474936  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475242  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.475262  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475362  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.475908  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.475957  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.476056  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.476285  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.478535  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.480540  314650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:27:59.481982  314650 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.482013  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:27:59.482045  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.485683  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486142  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.486177  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486465  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.486710  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.486889  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.487038  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.494246  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0122 21:27:59.494801  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.495426  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.495453  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.495905  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.496130  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.498296  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.498565  314650 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.498586  314650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:27:59.498611  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.501861  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502313  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.502346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502646  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.502865  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.503077  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.503233  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.724824  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:59.770671  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:59.770782  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:59.794707  314650 api_server.go:72] duration metric: took 394.235725ms to wait for apiserver process to appear ...
	I0122 21:27:59.794739  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:59.794764  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:59.830916  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.833823  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.833866  314650 api_server.go:131] duration metric: took 39.117571ms to wait for apiserver health ...
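	The healthz probe above can be reproduced by hand against the same endpoint; a minimal check from a host that can reach the VM (using -k because the apiserver certificate is signed by minikube's own CA, so plain curl will not trust it) is:
	
	  curl -k https://192.168.50.146:8443/healthz
	  # a healthy control plane answers with: ok
	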
	I0122 21:27:59.833879  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.842548  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.866014  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.866078  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.866091  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.866103  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.866113  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.866119  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.866128  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.866137  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.866143  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.866152  314650 system_pods.go:74] duration metric: took 32.265403ms to wait for pod list to return data ...
	I0122 21:27:59.866168  314650 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:27:59.871064  314650 default_sa.go:45] found service account: "default"
	I0122 21:27:59.871106  314650 default_sa.go:55] duration metric: took 4.928382ms for default service account to be created ...
	I0122 21:27:59.871125  314650 kubeadm.go:582] duration metric: took 470.664674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:59.871157  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.875089  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.875125  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.875139  314650 node_conditions.go:105] duration metric: took 3.96814ms to run NodePressure ...
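	The NodePressure check reads the node's reported capacity; roughly the same figures (17734596Ki ephemeral storage, 2 CPUs in this run) can be pulled with kubectl, assuming the single node carries the profile name as minikube normally does:
	
	  kubectl get node newest-cni-489789 -o jsonpath='{.status.capacity}'
	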
	I0122 21:27:59.875155  314650 start.go:241] waiting for startup goroutines ...
	I0122 21:27:59.879100  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.991147  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:27:59.991183  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:28:00.010416  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:28:00.010448  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:28:00.034463  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:28:00.034502  314650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:28:00.066923  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.066963  314650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:28:00.112671  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.155556  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:28:00.155594  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:28:00.224676  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:28:00.224717  314650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:28:00.402769  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:28:00.402799  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:28:00.611017  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:28:00.611060  314650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:28:00.746957  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:28:00.747012  314650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:28:00.817833  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:28:00.817864  314650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:28:00.905629  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:28:00.905658  314650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:28:00.973450  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:00.973488  314650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:28:01.033649  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:01.902642  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.023480792s)
	I0122 21:28:01.902735  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902750  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.902850  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.060261694s)
	I0122 21:28:01.902903  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902915  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.904921  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.904989  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.904996  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905018  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905027  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905036  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905033  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905093  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905102  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905104  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905492  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905513  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905534  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905540  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905567  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905581  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.914609  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.914638  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.914975  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.915021  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.915036  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003384  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.890658634s)
	I0122 21:28:02.003466  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003495  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.003851  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.003914  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.003943  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003952  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003960  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.004229  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.004247  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.004261  314650 addons.go:479] Verifying addon metrics-server=true in "newest-cni-489789"
	I0122 21:28:02.891241  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.857486932s)
	I0122 21:28:02.891533  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.891588  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894087  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.894100  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894130  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.894140  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.894149  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894509  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894564  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.896533  314650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-489789 addons enable metrics-server
	
	I0122 21:28:02.898219  314650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:28:02.900518  314650 addons.go:514] duration metric: took 3.499959979s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:28:02.900586  314650 start.go:246] waiting for cluster config update ...
	I0122 21:28:02.900604  314650 start.go:255] writing updated cluster config ...
	I0122 21:28:02.900904  314650 ssh_runner.go:195] Run: rm -f paused
	I0122 21:28:02.965147  314650 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:28:02.967085  314650 out.go:177] * Done! kubectl is now configured to use "newest-cni-489789" cluster and "default" namespace by default
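	Once the start completes, the enabled addons and their workloads can be checked directly; a quick sketch (the dashboard addon normally deploys into the kubernetes-dashboard namespace, and metrics-server into kube-system as listed above):
	
	  minikube -p newest-cni-489789 addons list
	  kubectl -n kube-system get deploy metrics-server
	  kubectl -n kubernetes-dashboard get pods
	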
	I0122 21:29:27.087272  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:29:27.087393  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:29:27.089567  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.089666  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.089781  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.089958  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.090084  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:27.090165  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:27.092167  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:27.092283  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:27.092358  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:27.092462  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:27.092535  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:27.092611  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:27.092682  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:27.092771  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:27.092848  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:27.092976  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:27.093094  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:27.093166  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:27.093261  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:27.093350  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:27.093398  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:27.093476  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:27.093559  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:27.093650  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:27.093720  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:27.093761  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:27.093818  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:27.095338  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:27.095468  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:27.095555  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:27.095632  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:27.095710  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:27.095838  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:29:27.095878  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:29:27.095937  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096106  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096195  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096453  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096565  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096796  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096867  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097090  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097177  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097367  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097386  312675 kubeadm.go:310] 
	I0122 21:29:27.097443  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:29:27.097512  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:29:27.097527  312675 kubeadm.go:310] 
	I0122 21:29:27.097557  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:29:27.097611  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:29:27.097761  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:29:27.097783  312675 kubeadm.go:310] 
	I0122 21:29:27.097878  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:29:27.097928  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:29:27.097955  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:29:27.097962  312675 kubeadm.go:310] 
	I0122 21:29:27.098055  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:29:27.098120  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:29:27.098127  312675 kubeadm.go:310] 
	I0122 21:29:27.098272  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:29:27.098357  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:29:27.098434  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:29:27.098533  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:29:27.098585  312675 kubeadm.go:310] 
	W0122 21:29:27.098687  312675 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
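	The commands kubeadm suggests above have to run inside the VM; with the kvm2 driver they can be wrapped in minikube ssh, for example (substitute the failing cluster's profile name for <profile>):
	
	  minikube -p <profile> ssh 'sudo systemctl status kubelet'
	  minikube -p <profile> ssh 'sudo journalctl -xeu kubelet | tail -n 100'
	  minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	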
	
	I0122 21:29:27.098731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:29:27.599261  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:29:27.617267  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:29:27.629164  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:29:27.629190  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:29:27.629255  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:29:27.641001  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:29:27.641072  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:29:27.653446  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:29:27.666334  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:29:27.666426  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:29:27.678551  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.689687  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:29:27.689757  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.702030  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:29:27.713507  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:29:27.713585  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
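	The four grep/rm pairs above implement the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. Roughly the same check as a shell loop (grep also fails, and the file is removed, when it does not exist, which is the case here):
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	      || sudo rm -f /etc/kubernetes/$f
	  done
	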
	I0122 21:29:27.726067  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:29:27.816417  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.816555  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.995432  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.995599  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.995745  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:28.218104  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:28.220056  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:28.220190  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:28.220278  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:28.220383  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:28.220486  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:28.220573  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:28.220648  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:28.220880  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:28.221175  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:28.222058  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:28.222351  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:28.222442  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:28.222530  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:28.304455  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:28.572192  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:28.869356  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:29.053609  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:29.082264  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:29.082429  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:29.082503  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:29.253931  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:29.256894  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:29.257044  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:29.267513  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:29.269154  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:29.270276  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:29.274228  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:30:09.277116  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:30:09.277238  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:09.277504  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:14.278173  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:14.278454  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:24.278945  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:24.279149  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:44.279492  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:44.279715  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278351  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:31:24.278612  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278628  312675 kubeadm.go:310] 
	I0122 21:31:24.278664  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:31:24.278723  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:31:24.278735  312675 kubeadm.go:310] 
	I0122 21:31:24.278775  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:31:24.278827  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:31:24.278956  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:31:24.278981  312675 kubeadm.go:310] 
	I0122 21:31:24.279066  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:31:24.279109  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:31:24.279140  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:31:24.279147  312675 kubeadm.go:310] 
	I0122 21:31:24.279253  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:31:24.279353  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:31:24.279373  312675 kubeadm.go:310] 
	I0122 21:31:24.279516  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:31:24.279639  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:31:24.279754  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:31:24.279837  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:31:24.279895  312675 kubeadm.go:310] 
	I0122 21:31:24.280842  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:31:24.280984  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:31:24.281074  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:31:24.281148  312675 kubeadm.go:394] duration metric: took 7m59.138107768s to StartCluster
	I0122 21:31:24.281220  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:31:24.281302  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:31:24.331184  312675 cri.go:89] found id: ""
	I0122 21:31:24.331225  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.331235  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:31:24.331242  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:31:24.331309  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:31:24.372934  312675 cri.go:89] found id: ""
	I0122 21:31:24.372963  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.372972  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:31:24.372979  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:31:24.373034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:31:24.413239  312675 cri.go:89] found id: ""
	I0122 21:31:24.413274  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.413284  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:31:24.413290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:31:24.413347  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:31:24.452513  312675 cri.go:89] found id: ""
	I0122 21:31:24.452552  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.452564  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:31:24.452573  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:31:24.452644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:31:24.491580  312675 cri.go:89] found id: ""
	I0122 21:31:24.491617  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.491629  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:31:24.491637  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:31:24.491710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:31:24.544823  312675 cri.go:89] found id: ""
	I0122 21:31:24.544856  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.544865  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:31:24.544872  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:31:24.544930  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:31:24.585047  312675 cri.go:89] found id: ""
	I0122 21:31:24.585085  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.585099  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:31:24.585108  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:31:24.585175  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:31:24.624152  312675 cri.go:89] found id: ""
	I0122 21:31:24.624189  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.624201  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
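	The eight "listing CRI containers" probes above are the per-component equivalent of the crictl hint in the kubeadm error; condensed into one loop they amount to the following (every query comes back empty here because the control plane never started):
	
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name=$c
	  done
	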
	I0122 21:31:24.624216  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:31:24.624231  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:31:24.717945  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:31:24.717971  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:31:24.717989  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:31:24.826216  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:31:24.826260  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:31:24.878403  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:31:24.878439  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:31:24.931058  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:31:24.931102  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0122 21:31:24.947080  312675 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:31:24.947171  312675 out.go:270] * 
	W0122 21:31:24.947310  312675 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the output above ...]
	
	W0122 21:31:24.947331  312675 out.go:270] * 
	W0122 21:31:24.948119  312675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:31:24.951080  312675 out.go:201] 
	W0122 21:31:24.952375  312675 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the output above ...]
	
	W0122 21:31:24.952433  312675 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:31:24.952459  312675 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:31:24.954056  312675 out.go:201] 
	
	
	==> CRI-O <==
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.741282968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582027741259933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b622608-b0e7-450e-9c57-da7c9ccdba42 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.742124116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56fb3258-a65e-4037-8c09-ac93a80c3ad6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.742183918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56fb3258-a65e-4037-8c09-ac93a80c3ad6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.742240271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=56fb3258-a65e-4037-8c09-ac93a80c3ad6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.782103250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=492e6faf-dcf0-40e4-9d45-9bb90df68736 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.782175305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=492e6faf-dcf0-40e4-9d45-9bb90df68736 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.783666775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5576057-7458-4cc2-91da-0136afa4c52b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.784166108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582027784134014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5576057-7458-4cc2-91da-0136afa4c52b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.785182689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccbf4279-855e-4195-b681-0b582948c9af name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.785265297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccbf4279-855e-4195-b681-0b582948c9af name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.785312279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ccbf4279-855e-4195-b681-0b582948c9af name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.825173177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9e8ca84-0988-4c07-8fcc-3a5bd9fd6a1d name=/runtime.v1.RuntimeService/Version
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.825282501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9e8ca84-0988-4c07-8fcc-3a5bd9fd6a1d name=/runtime.v1.RuntimeService/Version
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.827115323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55c4cc41-b3ea-4183-9665-c80cd0052d54 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.827532088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582027827507025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55c4cc41-b3ea-4183-9665-c80cd0052d54 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.828627413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b194082b-6a3b-471c-b0c4-e709e8918ba9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.828718629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b194082b-6a3b-471c-b0c4-e709e8918ba9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.828772175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b194082b-6a3b-471c-b0c4-e709e8918ba9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.868393504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8481352-e79c-4c7c-b170-d327d801e3a5 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.868479282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8481352-e79c-4c7c-b170-d327d801e3a5 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.870532395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4143f193-f998-4626-be02-9c0980085ef1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.871102112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582027871066146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4143f193-f998-4626-be02-9c0980085ef1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.871897030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54836c6f-bf51-4885-93d8-95c1183e691b name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.872059756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54836c6f-bf51-4885-93d8-95c1183e691b name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:40:27 old-k8s-version-181389 crio[623]: time="2025-01-22 21:40:27.872104427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=54836c6f-bf51-4885-93d8-95c1183e691b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan22 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044754] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan22 21:23] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.204474] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.706641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615943] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.071910] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069973] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.211991] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.154871] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.286617] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +7.265364] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.070492] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.978592] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[ +12.734713] kauditd_printk_skb: 46 callbacks suppressed
	[Jan22 21:27] systemd-fstab-generator[4928]: Ignoring "noauto" option for root device
	[Jan22 21:29] systemd-fstab-generator[5204]: Ignoring "noauto" option for root device
	[  +0.083243] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:40:28 up 17 min,  0 users,  load average: 0.00, 0.03, 0.05
	Linux old-k8s-version-181389 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0002a0a80)
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: goroutine 162 [syscall]:
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: syscall.Syscall6(0xe8, 0xe, 0xc000c09b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc000c09b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00076d900, 0x0, 0x0, 0x0)
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000119220)
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jan 22 21:40:24 old-k8s-version-181389 kubelet[6379]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jan 22 21:40:24 old-k8s-version-181389 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 22 21:40:24 old-k8s-version-181389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 22 21:40:25 old-k8s-version-181389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Jan 22 21:40:25 old-k8s-version-181389 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 22 21:40:25 old-k8s-version-181389 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 22 21:40:25 old-k8s-version-181389 kubelet[6388]: I0122 21:40:25.496606    6388 server.go:416] Version: v1.20.0
	Jan 22 21:40:25 old-k8s-version-181389 kubelet[6388]: I0122 21:40:25.496905    6388 server.go:837] Client rotation is on, will bootstrap in background
	Jan 22 21:40:25 old-k8s-version-181389 kubelet[6388]: I0122 21:40:25.499091    6388 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 22 21:40:25 old-k8s-version-181389 kubelet[6388]: W0122 21:40:25.500028    6388 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 22 21:40:25 old-k8s-version-181389 kubelet[6388]: I0122 21:40:25.500301    6388 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (261.070818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-181389" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.73s)
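Note: the failure above is the kubelet never becoming healthy during `kubeadm init` on the old-k8s-version profile; minikube's own output points at a cgroup-driver mismatch and suggests `journalctl -xeu kubelet` plus `--extra-config=kubelet.cgroup-driver=systemd`. A minimal triage sketch based on those suggestions, assuming SSH access to the VM and the profile name used in this run (illustrative only, not part of the test):

	# inspect the kubelet on the node (suggested by the log above)
	minikube ssh -p old-k8s-version-181389 -- sudo journalctl -xeu kubelet | tail -n 50
	# list any control-plane containers CRI-O managed to start (per the kubeadm hint)
	minikube ssh -p old-k8s-version-181389 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the start with the cgroup driver the log recommends
	minikube start -p old-k8s-version-181389 --extra-config=kubelet.cgroup-driver=systemd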

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (360.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:40:50.257828  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:41:51.117092  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:42:04.884898  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:42:07.452522  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:42:26.086388  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:42:47.117181  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:44:04.376498  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:44:34.482531  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:44:47.846551  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:45:11.097684  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
E0122 21:45:50.257347  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.222:8443: connect: connection refused
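Each warning above is one retry of the same pods list against the profile's apiserver. As a minimal sketch, the equivalent query could be run by hand, assuming the kubeconfig context old-k8s-version-181389 from this test and using placeholder certificate paths for the raw-API variant:

	# List the dashboard pods the test polls for (label k8s-app=kubernetes-dashboard).
	kubectl --context old-k8s-version-181389 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard

	# Same API path as in the warnings, queried directly (ca.crt/client.crt/client.key are placeholders).
	curl --cacert ca.crt --cert client.crt --key client.key \
	  "https://192.168.72.222:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"

While the apiserver at 192.168.72.222:8443 is down, both forms fail with the same "connection refused" error seen above.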
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (263.59582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-181389" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-181389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-181389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.368µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-181389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
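The empty deployment info above means the image check could not run. As a hedged sketch, once the apiserver is reachable the same check could be repeated by hand, using the context and the dashboard-metrics-scraper deployment already named in this log:

	# Print the container image(s) of the dashboard-metrics-scraper deployment;
	# the test expects the output to contain registry.k8s.io/echoserver:1.4.
	kubectl --context old-k8s-version-181389 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'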
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (252.632515ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-181389 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-806477                  | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC | 22 Jan 25 21:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-806477                                   | no-preload-806477            | jenkins | v1.35.0 | 22 Jan 25 21:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-635179                 | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-181389        | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-991469       | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC | 22 Jan 25 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991469 | jenkins | v1.35.0 | 22 Jan 25 21:21 UTC |                     |
	|         | default-k8s-diff-port-991469                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-181389             | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC | 22 Jan 25 21:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-181389                              | old-k8s-version-181389       | jenkins | v1.35.0 | 22 Jan 25 21:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-635179 image list                          | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| delete  | -p embed-certs-635179                                  | embed-certs-635179           | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:26 UTC |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:26 UTC | 22 Jan 25 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-489789             | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-489789                  | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-489789 --memory=2200 --alsologtostderr   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:27 UTC | 22 Jan 25 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-489789 image list                           | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	| delete  | -p newest-cni-489789                                   | newest-cni-489789            | jenkins | v1.35.0 | 22 Jan 25 21:28 UTC | 22 Jan 25 21:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:27:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:27:23.911116  314650 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:27:23.911744  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.911765  314650 out.go:358] Setting ErrFile to fd 2...
	I0122 21:27:23.911774  314650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:27:23.912250  314650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:27:23.913222  314650 out.go:352] Setting JSON to false
	I0122 21:27:23.914762  314650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":14990,"bootTime":1737566254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:27:23.914894  314650 start.go:139] virtualization: kvm guest
	I0122 21:27:23.916750  314650 out.go:177] * [newest-cni-489789] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:27:23.918320  314650 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:27:23.918320  314650 notify.go:220] Checking for updates...
	I0122 21:27:23.920824  314650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:27:23.922296  314650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:23.923574  314650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:27:23.924769  314650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:27:23.926102  314650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:27:23.927578  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:23.928058  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.928125  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.944579  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0122 21:27:23.945073  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.945640  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.945664  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.946073  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.946377  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:23.946689  314650 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:27:23.947048  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:23.947102  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:23.963420  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0122 21:27:23.963873  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:23.964454  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:23.964502  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:23.964926  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:23.965154  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.005605  314650 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:27:24.007129  314650 start.go:297] selected driver: kvm2
	I0122 21:27:24.007153  314650 start.go:901] validating driver "kvm2" against &{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Net
work: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.007318  314650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:27:24.008093  314650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.008222  314650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:27:24.024940  314650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:27:24.025456  314650 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:24.025502  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:24.025549  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:24.025588  314650 start.go:340] cluster config:
	{Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:24.025695  314650 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:27:24.027752  314650 out.go:177] * Starting "newest-cni-489789" primary control-plane node in "newest-cni-489789" cluster
	I0122 21:27:24.029033  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:24.029101  314650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 21:27:24.029119  314650 cache.go:56] Caching tarball of preloaded images
	I0122 21:27:24.029287  314650 preload.go:172] Found /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0122 21:27:24.029306  314650 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 21:27:24.029475  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:24.029808  314650 start.go:360] acquireMachinesLock for newest-cni-489789: {Name:mkd3ee07afa7e80b6bcd139f15d206bc8a587a99 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:27:24.029874  314650 start.go:364] duration metric: took 34.85µs to acquireMachinesLock for "newest-cni-489789"
	I0122 21:27:24.029897  314650 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:27:24.029908  314650 fix.go:54] fixHost starting: 
	I0122 21:27:24.030383  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:24.030486  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:24.046512  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0122 21:27:24.047013  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:24.047605  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:24.047640  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:24.048047  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:24.048290  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:24.048464  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:24.050271  314650 fix.go:112] recreateIfNeeded on newest-cni-489789: state=Stopped err=<nil>
	I0122 21:27:24.050304  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	W0122 21:27:24.050473  314650 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:27:24.052496  314650 out.go:177] * Restarting existing kvm2 VM for "newest-cni-489789" ...
	I0122 21:27:21.730303  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:21.747123  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:21.747212  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:21.793769  312675 cri.go:89] found id: ""
	I0122 21:27:21.793807  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.793827  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:21.793835  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:21.793912  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:21.840045  312675 cri.go:89] found id: ""
	I0122 21:27:21.840088  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.840101  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:21.840109  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:21.840187  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:21.885265  312675 cri.go:89] found id: ""
	I0122 21:27:21.885302  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.885314  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:21.885323  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:21.885404  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:21.937734  312675 cri.go:89] found id: ""
	I0122 21:27:21.937768  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.937777  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:21.937783  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:21.937844  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:21.989238  312675 cri.go:89] found id: ""
	I0122 21:27:21.989276  312675 logs.go:282] 0 containers: []
	W0122 21:27:21.989294  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:21.989300  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:21.989377  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:22.035837  312675 cri.go:89] found id: ""
	I0122 21:27:22.035921  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.035934  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:22.035944  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:22.036016  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:22.091690  312675 cri.go:89] found id: ""
	I0122 21:27:22.091731  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.091745  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:22.091754  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:22.091828  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:22.149775  312675 cri.go:89] found id: ""
	I0122 21:27:22.149888  312675 logs.go:282] 0 containers: []
	W0122 21:27:22.149913  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:22.149958  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:22.150005  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:22.213610  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:22.213665  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:22.233970  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:22.234014  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:22.318579  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:22.318606  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:22.318622  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:22.422850  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:22.422899  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:24.974063  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:24.990751  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:27:24.990850  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:27:25.036044  312675 cri.go:89] found id: ""
	I0122 21:27:25.036082  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.036094  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:27:25.036103  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:27:25.036173  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:27:25.078700  312675 cri.go:89] found id: ""
	I0122 21:27:25.078736  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.078748  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:27:25.078759  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:27:25.078829  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:27:25.134919  312675 cri.go:89] found id: ""
	I0122 21:27:25.134971  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.134984  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:27:25.134994  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:27:25.135075  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:27:25.183649  312675 cri.go:89] found id: ""
	I0122 21:27:25.183684  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.183695  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:27:25.183704  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:27:25.183778  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:27:25.240357  312675 cri.go:89] found id: ""
	I0122 21:27:25.240401  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.240414  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:27:25.240425  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:27:25.240555  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:27:25.284093  312675 cri.go:89] found id: ""
	I0122 21:27:25.284132  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.284141  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:27:25.284149  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:27:25.284218  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:27:25.328590  312675 cri.go:89] found id: ""
	I0122 21:27:25.328621  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.328632  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:27:25.328641  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:27:25.328710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:27:25.378479  312675 cri.go:89] found id: ""
	I0122 21:27:25.378517  312675 logs.go:282] 0 containers: []
	W0122 21:27:25.378529  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:27:25.378543  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:27:25.378559  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:27:25.433767  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:27:25.433800  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:27:24.053834  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Start
	I0122 21:27:24.054152  314650 main.go:141] libmachine: (newest-cni-489789) starting domain...
	I0122 21:27:24.054175  314650 main.go:141] libmachine: (newest-cni-489789) ensuring networks are active...
	I0122 21:27:24.055132  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network default is active
	I0122 21:27:24.055534  314650 main.go:141] libmachine: (newest-cni-489789) Ensuring network mk-newest-cni-489789 is active
	I0122 21:27:24.055963  314650 main.go:141] libmachine: (newest-cni-489789) getting domain XML...
	I0122 21:27:24.056886  314650 main.go:141] libmachine: (newest-cni-489789) creating domain...
	I0122 21:27:25.457503  314650 main.go:141] libmachine: (newest-cni-489789) waiting for IP...
	I0122 21:27:25.458754  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.459431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.459544  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.459394  314684 retry.go:31] will retry after 258.579884ms: waiting for domain to come up
	I0122 21:27:25.720098  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:25.720657  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:25.720704  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:25.720649  314684 retry.go:31] will retry after 347.192205ms: waiting for domain to come up
	I0122 21:27:26.069095  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.069843  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.069880  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.069813  314684 retry.go:31] will retry after 318.422908ms: waiting for domain to come up
	I0122 21:27:26.390692  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.391374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.391431  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.391350  314684 retry.go:31] will retry after 516.847382ms: waiting for domain to come up
	I0122 21:27:26.910252  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:26.910831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:26.910862  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:26.910801  314684 retry.go:31] will retry after 657.195872ms: waiting for domain to come up
	I0122 21:27:27.569972  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:27.570617  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:27.570651  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:27.570590  314684 retry.go:31] will retry after 601.660948ms: waiting for domain to come up
	I0122 21:27:28.173427  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:28.174022  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:28.174065  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:28.173988  314684 retry.go:31] will retry after 839.292486ms: waiting for domain to come up
	I0122 21:27:25.497717  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:27:25.497767  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0122 21:27:25.530904  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:27:25.530961  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:27:25.631676  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:27:25.631701  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:27:25.631717  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:27:28.221852  312675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:28.236702  312675 kubeadm.go:597] duration metric: took 4m3.036103838s to restartPrimaryControlPlane
	W0122 21:27:28.236803  312675 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:27:28.236837  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:27:29.014929  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:29.015535  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:29.015569  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:29.015501  314684 retry.go:31] will retry after 1.28366543s: waiting for domain to come up
	I0122 21:27:30.300346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:30.300806  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:30.300834  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:30.300775  314684 retry.go:31] will retry after 1.437378164s: waiting for domain to come up
	I0122 21:27:31.739437  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:31.740073  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:31.740106  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:31.740043  314684 retry.go:31] will retry after 1.547235719s: waiting for domain to come up
	I0122 21:27:33.289857  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:33.290395  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:33.290452  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:33.290357  314684 retry.go:31] will retry after 2.864838858s: waiting for domain to come up
	I0122 21:27:30.647940  312675 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.411072952s)
	I0122 21:27:30.648042  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:27:30.669610  312675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:30.684678  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:30.698168  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:30.698232  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:30.698285  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:30.708774  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:30.708855  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:30.720213  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:30.731121  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:30.731207  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:30.743153  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.754160  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:30.754262  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:30.765730  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:30.776902  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:30.776990  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:30.788361  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:27:31.040925  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:27:36.157916  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:36.158675  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:36.158706  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:36.158608  314684 retry.go:31] will retry after 3.253566336s: waiting for domain to come up
	I0122 21:27:39.413761  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:39.414347  314650 main.go:141] libmachine: (newest-cni-489789) DBG | unable to find current IP address of domain newest-cni-489789 in network mk-newest-cni-489789
	I0122 21:27:39.414380  314650 main.go:141] libmachine: (newest-cni-489789) DBG | I0122 21:27:39.414310  314684 retry.go:31] will retry after 3.952766125s: waiting for domain to come up
	I0122 21:27:43.371406  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371943  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has current primary IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.371999  314650 main.go:141] libmachine: (newest-cni-489789) found domain IP: 192.168.50.146
	I0122 21:27:43.372024  314650 main.go:141] libmachine: (newest-cni-489789) reserving static IP address...
	I0122 21:27:43.372454  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.372482  314650 main.go:141] libmachine: (newest-cni-489789) DBG | skip adding static IP to network mk-newest-cni-489789 - found existing host DHCP lease matching {name: "newest-cni-489789", mac: "52:54:00:c5:b4:d9", ip: "192.168.50.146"}
	I0122 21:27:43.372502  314650 main.go:141] libmachine: (newest-cni-489789) reserved static IP address 192.168.50.146 for domain newest-cni-489789
	I0122 21:27:43.372516  314650 main.go:141] libmachine: (newest-cni-489789) waiting for SSH...
	I0122 21:27:43.372527  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Getting to WaitForSSH function...
	I0122 21:27:43.374698  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.374984  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.375016  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.375148  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH client type: external
	I0122 21:27:43.375173  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa (-rw-------)
	I0122 21:27:43.375212  314650 main.go:141] libmachine: (newest-cni-489789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:27:43.375232  314650 main.go:141] libmachine: (newest-cni-489789) DBG | About to run SSH command:
	I0122 21:27:43.375243  314650 main.go:141] libmachine: (newest-cni-489789) DBG | exit 0
	I0122 21:27:43.503039  314650 main.go:141] libmachine: (newest-cni-489789) DBG | SSH cmd err, output: <nil>: 
	I0122 21:27:43.503449  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetConfigRaw
	I0122 21:27:43.504138  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.507198  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507562  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.507607  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.507876  314650 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/config.json ...
	I0122 21:27:43.508166  314650 machine.go:93] provisionDockerMachine start ...
	I0122 21:27:43.508196  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:43.508518  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.511111  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511408  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.511442  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.511632  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.511842  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512002  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.512147  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.512352  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.512624  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.512643  314650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:27:43.619425  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:27:43.619472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619742  314650 buildroot.go:166] provisioning hostname "newest-cni-489789"
	I0122 21:27:43.619772  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.619998  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.622781  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623242  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.623285  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.623505  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.623728  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.623892  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.624013  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.624154  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.624410  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.624432  314650 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-489789 && echo "newest-cni-489789" | sudo tee /etc/hostname
	I0122 21:27:43.747575  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-489789
	
	I0122 21:27:43.747605  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.750745  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751080  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.751127  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.751553  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:43.751775  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.751918  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:43.752035  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:43.752185  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:43.752425  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:43.752465  314650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-489789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-489789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-489789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:27:43.865258  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:27:43.865290  314650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-247142/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-247142/.minikube}
	I0122 21:27:43.865312  314650 buildroot.go:174] setting up certificates
	I0122 21:27:43.865327  314650 provision.go:84] configureAuth start
	I0122 21:27:43.865362  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetMachineName
	I0122 21:27:43.865704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:43.868648  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.868993  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.869025  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.869222  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:43.871572  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.871860  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:43.871894  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:43.872044  314650 provision.go:143] copyHostCerts
	I0122 21:27:43.872109  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem, removing ...
	I0122 21:27:43.872130  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem
	I0122 21:27:43.872205  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/ca.pem (1082 bytes)
	I0122 21:27:43.872312  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem, removing ...
	I0122 21:27:43.872321  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem
	I0122 21:27:43.872346  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/cert.pem (1123 bytes)
	I0122 21:27:43.872433  314650 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem, removing ...
	I0122 21:27:43.872447  314650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem
	I0122 21:27:43.872471  314650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-247142/.minikube/key.pem (1675 bytes)
	I0122 21:27:43.872536  314650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-489789 san=[127.0.0.1 192.168.50.146 localhost minikube newest-cni-489789]
	I0122 21:27:44.234481  314650 provision.go:177] copyRemoteCerts
	I0122 21:27:44.234579  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:27:44.234618  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.237848  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238297  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.238332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.238604  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.238788  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.238988  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.239154  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.326083  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0122 21:27:44.355837  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:27:44.387644  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:27:44.418003  314650 provision.go:87] duration metric: took 552.65522ms to configureAuth
	I0122 21:27:44.418039  314650 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:27:44.418347  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:44.418475  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.421349  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.421796  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.421839  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.422067  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.422301  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422470  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.422603  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.422810  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.423129  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.423156  314650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0122 21:27:44.671197  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0122 21:27:44.671232  314650 machine.go:96] duration metric: took 1.163046458s to provisionDockerMachine
	I0122 21:27:44.671247  314650 start.go:293] postStartSetup for "newest-cni-489789" (driver="kvm2")
	I0122 21:27:44.671261  314650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:27:44.671289  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.671667  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:27:44.671704  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.674811  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675137  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.675164  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.675350  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.675624  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.675817  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.675987  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.759194  314650 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:27:44.764553  314650 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:27:44.764591  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/addons for local assets ...
	I0122 21:27:44.764668  314650 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-247142/.minikube/files for local assets ...
	I0122 21:27:44.764741  314650 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem -> 2547542.pem in /etc/ssl/certs
	I0122 21:27:44.764835  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:27:44.778239  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:44.807409  314650 start.go:296] duration metric: took 136.131239ms for postStartSetup
	I0122 21:27:44.807474  314650 fix.go:56] duration metric: took 20.777566838s for fixHost
	I0122 21:27:44.807580  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.810883  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811279  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.811312  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.811472  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.811736  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.811908  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.812086  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.812268  314650 main.go:141] libmachine: Using SSH client type: native
	I0122 21:27:44.812448  314650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I0122 21:27:44.812459  314650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:27:44.915903  314650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737581264.870208902
	
	I0122 21:27:44.915934  314650 fix.go:216] guest clock: 1737581264.870208902
	I0122 21:27:44.915945  314650 fix.go:229] Guest: 2025-01-22 21:27:44.870208902 +0000 UTC Remote: 2025-01-22 21:27:44.807479632 +0000 UTC m=+20.941890306 (delta=62.72927ms)
	I0122 21:27:44.915983  314650 fix.go:200] guest clock delta is within tolerance: 62.72927ms
	I0122 21:27:44.915991  314650 start.go:83] releasing machines lock for "newest-cni-489789", held for 20.886101347s
	I0122 21:27:44.916019  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.916292  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:44.919374  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.919795  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.919831  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.920026  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920725  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.920966  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:44.921087  314650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:27:44.921144  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.921271  314650 ssh_runner.go:195] Run: cat /version.json
	I0122 21:27:44.921303  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:44.924275  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924511  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924546  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924566  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924759  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.924827  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:44.924871  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:44.924995  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925090  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:44.925199  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925283  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:44.925319  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:44.925420  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:44.925532  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:45.025072  314650 ssh_runner.go:195] Run: systemctl --version
	I0122 21:27:45.032652  314650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0122 21:27:45.187726  314650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:27:45.194767  314650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:27:45.194851  314650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:27:45.213610  314650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:27:45.213644  314650 start.go:495] detecting cgroup driver to use...
	I0122 21:27:45.213723  314650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 21:27:45.231803  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 21:27:45.247682  314650 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:27:45.247801  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:27:45.263581  314650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:27:45.279536  314650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:27:45.406663  314650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:27:45.562297  314650 docker.go:233] disabling docker service ...
	I0122 21:27:45.562383  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:27:45.579904  314650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:27:45.595144  314650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:27:45.739957  314650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:27:45.866024  314650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:27:45.882728  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:27:45.907297  314650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0122 21:27:45.907388  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.920271  314650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0122 21:27:45.920341  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.933095  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.945711  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.958348  314650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:27:45.972409  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:45.989090  314650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.011819  314650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0122 21:27:46.025229  314650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:27:46.038393  314650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:27:46.038475  314650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:27:46.055252  314650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:27:46.068173  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:46.196285  314650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0122 21:27:46.295821  314650 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0122 21:27:46.295921  314650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0122 21:27:46.301506  314650 start.go:563] Will wait 60s for crictl version
	I0122 21:27:46.301587  314650 ssh_runner.go:195] Run: which crictl
	I0122 21:27:46.306074  314650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:27:46.352624  314650 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0122 21:27:46.352727  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.385398  314650 ssh_runner.go:195] Run: crio --version
	I0122 21:27:46.422040  314650 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0122 21:27:46.423591  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetIP
	I0122 21:27:46.426902  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427305  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:46.427332  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:46.427679  314650 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0122 21:27:46.432609  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:46.448941  314650 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0122 21:27:46.450413  314650 kubeadm.go:883] updating cluster {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:27:46.450575  314650 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 21:27:46.450683  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:46.496073  314650 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:27:46.496165  314650 ssh_runner.go:195] Run: which lz4
	I0122 21:27:46.500895  314650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:27:46.505854  314650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:27:46.505909  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0122 21:27:48.159588  314650 crio.go:462] duration metric: took 1.658732075s to copy over tarball
	I0122 21:27:48.159687  314650 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:27:50.643587  314650 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483861806s)
	I0122 21:27:50.643623  314650 crio.go:469] duration metric: took 2.483996867s to extract the tarball
	I0122 21:27:50.643632  314650 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:27:50.683708  314650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:27:50.732147  314650 crio.go:514] all images are preloaded for cri-o runtime.
	I0122 21:27:50.732183  314650 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:27:50.732194  314650 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.32.1 crio true true} ...
	I0122 21:27:50.732350  314650 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-489789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:27:50.732425  314650 ssh_runner.go:195] Run: crio config
	I0122 21:27:50.789877  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:50.789904  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:50.789920  314650 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0122 21:27:50.789953  314650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-489789 NodeName:newest-cni-489789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:27:50.790132  314650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-489789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.146"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:27:50.790261  314650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:27:50.801652  314650 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:27:50.801742  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:27:50.813168  314650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0122 21:27:50.832707  314650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:27:50.852375  314650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0122 21:27:50.875185  314650 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I0122 21:27:50.879818  314650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:27:50.893992  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:51.040056  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:51.060681  314650 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789 for IP: 192.168.50.146
	I0122 21:27:51.060711  314650 certs.go:194] generating shared ca certs ...
	I0122 21:27:51.060737  314650 certs.go:226] acquiring lock for ca certs: {Name:mkdd0d4b6fa26e9115895f82be25875589405ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.060940  314650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key
	I0122 21:27:51.061018  314650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key
	I0122 21:27:51.061036  314650 certs.go:256] generating profile certs ...
	I0122 21:27:51.061157  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/client.key
	I0122 21:27:51.061251  314650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key.de28c3d3
	I0122 21:27:51.061317  314650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key
	I0122 21:27:51.061482  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem (1338 bytes)
	W0122 21:27:51.061526  314650 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754_empty.pem, impossibly tiny 0 bytes
	I0122 21:27:51.061539  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:27:51.061572  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/ca.pem (1082 bytes)
	I0122 21:27:51.061603  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:27:51.061636  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/certs/key.pem (1675 bytes)
	I0122 21:27:51.061692  314650 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem (1708 bytes)
	I0122 21:27:51.062633  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:27:51.098858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 21:27:51.145243  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:27:51.180019  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 21:27:51.208916  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:27:51.237139  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:27:51.270858  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:27:51.306734  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/newest-cni-489789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:27:51.341424  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/certs/254754.pem --> /usr/share/ca-certificates/254754.pem (1338 bytes)
	I0122 21:27:51.370701  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/ssl/certs/2547542.pem --> /usr/share/ca-certificates/2547542.pem (1708 bytes)
	I0122 21:27:51.402552  314650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:27:51.431817  314650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:27:51.452816  314650 ssh_runner.go:195] Run: openssl version
	I0122 21:27:51.460223  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2547542.pem && ln -fs /usr/share/ca-certificates/2547542.pem /etc/ssl/certs/2547542.pem"
	I0122 21:27:51.474716  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480785  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:11 /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.480874  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2547542.pem
	I0122 21:27:51.489093  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2547542.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:27:51.501870  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:27:51.514659  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520559  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.520713  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:27:51.527928  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:27:51.541856  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254754.pem && ln -fs /usr/share/ca-certificates/254754.pem /etc/ssl/certs/254754.pem"
	I0122 21:27:51.555463  314650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561295  314650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:11 /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.561368  314650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254754.pem
	I0122 21:27:51.568531  314650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/254754.pem /etc/ssl/certs/51391683.0"
	I0122 21:27:51.584716  314650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:27:51.590762  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:27:51.598592  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:27:51.605666  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:27:51.613414  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:27:51.621894  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:27:51.629916  314650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0122 21:27:51.636995  314650 kubeadm.go:392] StartCluster: {Name:newest-cni-489789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-489789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:27:51.637138  314650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0122 21:27:51.637358  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.691610  314650 cri.go:89] found id: ""
	I0122 21:27:51.691683  314650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:27:51.703943  314650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:27:51.703976  314650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:27:51.704044  314650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:27:51.715920  314650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:27:51.716767  314650 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-489789" does not appear in /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:51.717203  314650 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-247142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-489789" cluster setting kubeconfig missing "newest-cni-489789" context setting]
	I0122 21:27:51.717901  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:51.729230  314650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:27:51.741794  314650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I0122 21:27:51.741842  314650 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:27:51.741859  314650 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0122 21:27:51.741927  314650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:27:51.789068  314650 cri.go:89] found id: ""
	I0122 21:27:51.789171  314650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:27:51.809451  314650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:27:51.821492  314650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:27:51.821515  314650 kubeadm.go:157] found existing configuration files:
	
	I0122 21:27:51.821564  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:27:51.833428  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:27:51.833507  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:27:51.845423  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:27:51.856151  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:27:51.856247  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:27:51.868260  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.879595  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:27:51.879671  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:27:51.892482  314650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:27:51.905485  314650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:27:51.905558  314650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:27:51.917498  314650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:27:51.930487  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:52.072199  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.069420  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.321398  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.393577  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:53.471920  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:53.472027  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:53.972577  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.472481  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.972531  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:54.989674  314650 api_server.go:72] duration metric: took 1.517756303s to wait for apiserver process to appear ...
	I0122 21:27:54.989707  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:54.989729  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.208473  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.208515  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.208536  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.292726  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:27:57.292780  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:27:57.490170  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.499620  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.499655  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:57.990312  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:57.998214  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:57.998257  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.489875  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.496876  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0122 21:27:58.496913  314650 api_server.go:103] status: https://192.168.50.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:27:58.990600  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:58.995909  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.004894  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.004943  314650 api_server.go:131] duration metric: took 4.015227175s to wait for apiserver health ...
	I0122 21:27:59.004977  314650 cni.go:84] Creating CNI manager for ""
	I0122 21:27:59.004987  314650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 21:27:59.006689  314650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:27:59.008029  314650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:27:59.020070  314650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:27:59.044659  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.055648  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.055702  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.055713  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.055724  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.055732  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.055738  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.055754  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.055766  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.055774  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.055788  314650 system_pods.go:74] duration metric: took 11.091605ms to wait for pod list to return data ...
	I0122 21:27:59.055802  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.060105  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.060148  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.060164  314650 node_conditions.go:105] duration metric: took 4.355866ms to run NodePressure ...
	I0122 21:27:59.060188  314650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:27:59.384018  314650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:27:59.398090  314650 ops.go:34] apiserver oom_adj: -16
	I0122 21:27:59.398128  314650 kubeadm.go:597] duration metric: took 7.694142189s to restartPrimaryControlPlane
	I0122 21:27:59.398142  314650 kubeadm.go:394] duration metric: took 7.761160632s to StartCluster
	I0122 21:27:59.398170  314650 settings.go:142] acquiring lock: {Name:mkd1753661c2351dd9318eb8eab12d9164b6fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.398290  314650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:27:59.400046  314650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/kubeconfig: {Name:mkb9f04b779d499bc5ba460c332717e5db92b17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:27:59.400419  314650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0122 21:27:59.400556  314650 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:27:59.400665  314650 config.go:182] Loaded profile config "newest-cni-489789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:27:59.400686  314650 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-489789"
	I0122 21:27:59.400707  314650 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-489789"
	W0122 21:27:59.400716  314650 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:27:59.400726  314650 addons.go:69] Setting default-storageclass=true in profile "newest-cni-489789"
	I0122 21:27:59.400741  314650 addons.go:69] Setting dashboard=true in profile "newest-cni-489789"
	I0122 21:27:59.400761  314650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-489789"
	I0122 21:27:59.400768  314650 addons.go:238] Setting addon dashboard=true in "newest-cni-489789"
	W0122 21:27:59.400778  314650 addons.go:247] addon dashboard should already be in state true
	I0122 21:27:59.400815  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.400765  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401204  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401235  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401237  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401262  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.401321  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.400718  314650 addons.go:69] Setting metrics-server=true in profile "newest-cni-489789"
	I0122 21:27:59.401464  314650 addons.go:238] Setting addon metrics-server=true in "newest-cni-489789"
	W0122 21:27:59.401475  314650 addons.go:247] addon metrics-server should already be in state true
	I0122 21:27:59.401509  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.401887  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.401975  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.402025  314650 out.go:177] * Verifying Kubernetes components...
	I0122 21:27:59.403359  314650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0122 21:27:59.420697  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0122 21:27:59.421021  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0122 21:27:59.421349  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421459  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.421547  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.422098  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422122  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422144  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422325  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.422349  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.422401  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0122 21:27:59.423146  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423151  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423148  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.423359  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.423430  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.423817  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.423841  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.423816  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.423882  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.424405  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.425054  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425105  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.425288  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.425335  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.427261  314650 addons.go:238] Setting addon default-storageclass=true in "newest-cni-489789"
	W0122 21:27:59.427282  314650 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:27:59.427316  314650 host.go:66] Checking if "newest-cni-489789" exists ...
	I0122 21:27:59.427674  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.427723  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.446713  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0122 21:27:59.446783  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0122 21:27:59.451272  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451373  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.451946  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.451969  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452101  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.452121  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.452538  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.452791  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.452801  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.453414  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.455400  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.455881  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.457716  314650 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:27:59.457751  314650 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:27:59.459475  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:27:59.459504  314650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:27:59.459539  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.460864  314650 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:27:59.462275  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:27:59.462311  314650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:27:59.462354  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.466673  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467509  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.467541  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.467851  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.468096  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.468288  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.468589  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.468600  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.469258  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.469308  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.469497  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.469679  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.469875  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.470056  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.473781  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0122 21:27:59.473966  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0122 21:27:59.474357  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474615  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.474910  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.474936  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475242  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.475262  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.475362  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.475908  314650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 21:27:59.475957  314650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:27:59.476056  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.476285  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.478535  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.480540  314650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:27:59.481982  314650 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.482013  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:27:59.482045  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.485683  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486142  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.486177  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.486465  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.486710  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.486889  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.487038  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.494246  314650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0122 21:27:59.494801  314650 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:27:59.495426  314650 main.go:141] libmachine: Using API Version  1
	I0122 21:27:59.495453  314650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:27:59.495905  314650 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:27:59.496130  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetState
	I0122 21:27:59.498296  314650 main.go:141] libmachine: (newest-cni-489789) Calling .DriverName
	I0122 21:27:59.498565  314650 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.498586  314650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:27:59.498611  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHHostname
	I0122 21:27:59.501861  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502313  314650 main.go:141] libmachine: (newest-cni-489789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:b4:d9", ip: ""} in network mk-newest-cni-489789: {Iface:virbr2 ExpiryTime:2025-01-22 22:27:36 +0000 UTC Type:0 Mac:52:54:00:c5:b4:d9 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:newest-cni-489789 Clientid:01:52:54:00:c5:b4:d9}
	I0122 21:27:59.502346  314650 main.go:141] libmachine: (newest-cni-489789) DBG | domain newest-cni-489789 has defined IP address 192.168.50.146 and MAC address 52:54:00:c5:b4:d9 in network mk-newest-cni-489789
	I0122 21:27:59.502646  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHPort
	I0122 21:27:59.502865  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHKeyPath
	I0122 21:27:59.503077  314650 main.go:141] libmachine: (newest-cni-489789) Calling .GetSSHUsername
	I0122 21:27:59.503233  314650 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/newest-cni-489789/id_rsa Username:docker}
	I0122 21:27:59.724824  314650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:27:59.770671  314650 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:27:59.770782  314650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:27:59.794707  314650 api_server.go:72] duration metric: took 394.235725ms to wait for apiserver process to appear ...
	I0122 21:27:59.794739  314650 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:27:59.794764  314650 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0122 21:27:59.830916  314650 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0122 21:27:59.833823  314650 api_server.go:141] control plane version: v1.32.1
	I0122 21:27:59.833866  314650 api_server.go:131] duration metric: took 39.117571ms to wait for apiserver health ...
	I0122 21:27:59.833879  314650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:27:59.842548  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:27:59.866014  314650 system_pods.go:59] 8 kube-system pods found
	I0122 21:27:59.866078  314650 system_pods.go:61] "coredns-668d6bf9bc-j4plt" [148d05e6-8770-4af7-bdbe-cd5a5f8dd29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:27:59.866091  314650 system_pods.go:61] "etcd-newest-cni-489789" [c8170cf7-3a96-44e4-b00e-18d85c1b7986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:27:59.866103  314650 system_pods.go:61] "kube-apiserver-newest-cni-489789" [6ffe2038-7158-4e18-b918-97456a0a041d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:27:59.866113  314650 system_pods.go:61] "kube-controller-manager-newest-cni-489789" [b725f80f-9d41-4128-8d21-fe71b2b2279e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:27:59.866119  314650 system_pods.go:61] "kube-proxy-ln878" [010174ac-4a25-4a32-bc4b-18e7f04b94c8] Running
	I0122 21:27:59.866128  314650 system_pods.go:61] "kube-scheduler-newest-cni-489789" [3b8995ec-114b-4e51-94bf-f38ec3c2a1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:27:59.866137  314650 system_pods.go:61] "metrics-server-f79f97bbb-hwz7d" [93786d6e-095b-4543-9a36-eb57b54ab6b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:27:59.866143  314650 system_pods.go:61] "storage-provisioner" [9d443319-6b6b-446a-a3cb-242157e85a55] Running
	I0122 21:27:59.866152  314650 system_pods.go:74] duration metric: took 32.265403ms to wait for pod list to return data ...
	I0122 21:27:59.866168  314650 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:27:59.871064  314650 default_sa.go:45] found service account: "default"
	I0122 21:27:59.871106  314650 default_sa.go:55] duration metric: took 4.928382ms for default service account to be created ...
	I0122 21:27:59.871125  314650 kubeadm.go:582] duration metric: took 470.664674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0122 21:27:59.871157  314650 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:27:59.875089  314650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:27:59.875125  314650 node_conditions.go:123] node cpu capacity is 2
	I0122 21:27:59.875139  314650 node_conditions.go:105] duration metric: took 3.96814ms to run NodePressure ...
	I0122 21:27:59.875155  314650 start.go:241] waiting for startup goroutines ...
	I0122 21:27:59.879100  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:27:59.991147  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:27:59.991183  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:28:00.010416  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:28:00.010448  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:28:00.034463  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:28:00.034502  314650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:28:00.066923  314650 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.066963  314650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:28:00.112671  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:28:00.155556  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:28:00.155594  314650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:28:00.224676  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:28:00.224717  314650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:28:00.402769  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:28:00.402799  314650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:28:00.611017  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:28:00.611060  314650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:28:00.746957  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:28:00.747012  314650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:28:00.817833  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:28:00.817864  314650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:28:00.905629  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:28:00.905658  314650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:28:00.973450  314650 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:00.973488  314650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:28:01.033649  314650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:28:01.902642  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.023480792s)
	I0122 21:28:01.902735  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902750  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.902850  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.060261694s)
	I0122 21:28:01.902903  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.902915  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.904921  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.904989  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.904996  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905018  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905027  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905036  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905033  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905093  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.905102  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.905104  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905492  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905513  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905534  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.905540  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.905567  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.905581  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:01.914609  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:01.914638  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:01.914975  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:01.915021  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:01.915036  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003384  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.890658634s)
	I0122 21:28:02.003466  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003495  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.003851  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.003914  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.003943  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.003952  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.003960  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.004229  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.004247  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.004261  314650 addons.go:479] Verifying addon metrics-server=true in "newest-cni-489789"
	I0122 21:28:02.891241  314650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.857486932s)
	I0122 21:28:02.891533  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.891588  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894087  314650 main.go:141] libmachine: (newest-cni-489789) DBG | Closing plugin on server side
	I0122 21:28:02.894100  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894130  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.894140  314650 main.go:141] libmachine: Making call to close driver server
	I0122 21:28:02.894149  314650 main.go:141] libmachine: (newest-cni-489789) Calling .Close
	I0122 21:28:02.894509  314650 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:28:02.894564  314650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:28:02.896533  314650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-489789 addons enable metrics-server
	
	I0122 21:28:02.898219  314650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0122 21:28:02.900518  314650 addons.go:514] duration metric: took 3.499959979s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0122 21:28:02.900586  314650 start.go:246] waiting for cluster config update ...
	I0122 21:28:02.900604  314650 start.go:255] writing updated cluster config ...
	I0122 21:28:02.900904  314650 ssh_runner.go:195] Run: rm -f paused
	I0122 21:28:02.965147  314650 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:28:02.967085  314650 out.go:177] * Done! kubectl is now configured to use "newest-cni-489789" cluster and "default" namespace by default
	I0122 21:29:27.087272  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:29:27.087393  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:29:27.089567  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.089666  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.089781  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.089958  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.090084  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:27.090165  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:27.092167  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:27.092283  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:27.092358  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:27.092462  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:27.092535  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:27.092611  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:27.092682  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:27.092771  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:27.092848  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:27.092976  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:27.093094  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:27.093166  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:27.093261  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:27.093350  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:27.093398  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:27.093476  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:27.093559  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:27.093650  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:27.093720  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:27.093761  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:27.093818  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:27.095338  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:27.095468  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:27.095555  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:27.095632  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:27.095710  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:27.095838  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:29:27.095878  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:29:27.095937  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096106  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096195  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096453  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096565  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.096796  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.096867  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097090  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097177  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:29:27.097367  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:29:27.097386  312675 kubeadm.go:310] 
	I0122 21:29:27.097443  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:29:27.097512  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:29:27.097527  312675 kubeadm.go:310] 
	I0122 21:29:27.097557  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:29:27.097611  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:29:27.097761  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:29:27.097783  312675 kubeadm.go:310] 
	I0122 21:29:27.097878  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:29:27.097928  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:29:27.097955  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:29:27.097962  312675 kubeadm.go:310] 
	I0122 21:29:27.098055  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:29:27.098120  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:29:27.098127  312675 kubeadm.go:310] 
	I0122 21:29:27.098272  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:29:27.098357  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:29:27.098434  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:29:27.098533  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:29:27.098585  312675 kubeadm.go:310] 
	W0122 21:29:27.098687  312675 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0122 21:29:27.098731  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0122 21:29:27.599261  312675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:29:27.617267  312675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:29:27.629164  312675 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:29:27.629190  312675 kubeadm.go:157] found existing configuration files:
	
	I0122 21:29:27.629255  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:29:27.641001  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:29:27.641072  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:29:27.653446  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:29:27.666334  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:29:27.666426  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:29:27.678551  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.689687  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:29:27.689757  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:29:27.702030  312675 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:29:27.713507  312675 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:29:27.713585  312675 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:29:27.726067  312675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:29:27.816417  312675 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0122 21:29:27.816555  312675 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:29:27.995432  312675 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:29:27.995599  312675 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:29:27.995745  312675 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 21:29:28.218104  312675 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:29:28.220056  312675 out.go:235]   - Generating certificates and keys ...
	I0122 21:29:28.220190  312675 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:29:28.220278  312675 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:29:28.220383  312675 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:29:28.220486  312675 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:29:28.220573  312675 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:29:28.220648  312675 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:29:28.220880  312675 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:29:28.221175  312675 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:29:28.222058  312675 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:29:28.222351  312675 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:29:28.222442  312675 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:29:28.222530  312675 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:29:28.304455  312675 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:29:28.572192  312675 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:29:28.869356  312675 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:29:29.053609  312675 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:29:29.082264  312675 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:29:29.082429  312675 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:29:29.082503  312675 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:29:29.253931  312675 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:29:29.256894  312675 out.go:235]   - Booting up control plane ...
	I0122 21:29:29.257044  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:29:29.267513  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:29:29.269154  312675 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:29:29.270276  312675 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:29:29.274228  312675 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 21:30:09.277116  312675 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0122 21:30:09.277238  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:09.277504  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:14.278173  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:14.278454  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:24.278945  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:24.279149  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:30:44.279492  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:30:44.279715  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278351  312675 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0122 21:31:24.278612  312675 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0122 21:31:24.278628  312675 kubeadm.go:310] 
	I0122 21:31:24.278664  312675 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0122 21:31:24.278723  312675 kubeadm.go:310] 		timed out waiting for the condition
	I0122 21:31:24.278735  312675 kubeadm.go:310] 
	I0122 21:31:24.278775  312675 kubeadm.go:310] 	This error is likely caused by:
	I0122 21:31:24.278827  312675 kubeadm.go:310] 		- The kubelet is not running
	I0122 21:31:24.278956  312675 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0122 21:31:24.278981  312675 kubeadm.go:310] 
	I0122 21:31:24.279066  312675 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0122 21:31:24.279109  312675 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0122 21:31:24.279140  312675 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0122 21:31:24.279147  312675 kubeadm.go:310] 
	I0122 21:31:24.279253  312675 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0122 21:31:24.279353  312675 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0122 21:31:24.279373  312675 kubeadm.go:310] 
	I0122 21:31:24.279516  312675 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0122 21:31:24.279639  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0122 21:31:24.279754  312675 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0122 21:31:24.279837  312675 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0122 21:31:24.279895  312675 kubeadm.go:310] 
	I0122 21:31:24.280842  312675 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:31:24.280984  312675 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0122 21:31:24.281074  312675 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0122 21:31:24.281148  312675 kubeadm.go:394] duration metric: took 7m59.138107768s to StartCluster
	I0122 21:31:24.281220  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0122 21:31:24.281302  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0122 21:31:24.331184  312675 cri.go:89] found id: ""
	I0122 21:31:24.331225  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.331235  312675 logs.go:284] No container was found matching "kube-apiserver"
	I0122 21:31:24.331242  312675 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0122 21:31:24.331309  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0122 21:31:24.372934  312675 cri.go:89] found id: ""
	I0122 21:31:24.372963  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.372972  312675 logs.go:284] No container was found matching "etcd"
	I0122 21:31:24.372979  312675 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0122 21:31:24.373034  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0122 21:31:24.413239  312675 cri.go:89] found id: ""
	I0122 21:31:24.413274  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.413284  312675 logs.go:284] No container was found matching "coredns"
	I0122 21:31:24.413290  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0122 21:31:24.413347  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0122 21:31:24.452513  312675 cri.go:89] found id: ""
	I0122 21:31:24.452552  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.452564  312675 logs.go:284] No container was found matching "kube-scheduler"
	I0122 21:31:24.452573  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0122 21:31:24.452644  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0122 21:31:24.491580  312675 cri.go:89] found id: ""
	I0122 21:31:24.491617  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.491629  312675 logs.go:284] No container was found matching "kube-proxy"
	I0122 21:31:24.491637  312675 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0122 21:31:24.491710  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0122 21:31:24.544823  312675 cri.go:89] found id: ""
	I0122 21:31:24.544856  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.544865  312675 logs.go:284] No container was found matching "kube-controller-manager"
	I0122 21:31:24.544872  312675 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0122 21:31:24.544930  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0122 21:31:24.585047  312675 cri.go:89] found id: ""
	I0122 21:31:24.585085  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.585099  312675 logs.go:284] No container was found matching "kindnet"
	I0122 21:31:24.585108  312675 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0122 21:31:24.585175  312675 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0122 21:31:24.624152  312675 cri.go:89] found id: ""
	I0122 21:31:24.624189  312675 logs.go:282] 0 containers: []
	W0122 21:31:24.624201  312675 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0122 21:31:24.624216  312675 logs.go:123] Gathering logs for describe nodes ...
	I0122 21:31:24.624231  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0122 21:31:24.717945  312675 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0122 21:31:24.717971  312675 logs.go:123] Gathering logs for CRI-O ...
	I0122 21:31:24.717989  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0122 21:31:24.826216  312675 logs.go:123] Gathering logs for container status ...
	I0122 21:31:24.826260  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0122 21:31:24.878403  312675 logs.go:123] Gathering logs for kubelet ...
	I0122 21:31:24.878439  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0122 21:31:24.931058  312675 logs.go:123] Gathering logs for dmesg ...
	I0122 21:31:24.931102  312675 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0122 21:31:24.947080  312675 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0122 21:31:24.947171  312675 out.go:270] * 
	W0122 21:31:24.947310  312675 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.947331  312675 out.go:270] * 
	W0122 21:31:24.948119  312675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 21:31:24.951080  312675 out.go:201] 
	W0122 21:31:24.952375  312675 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0122 21:31:24.952433  312675 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0122 21:31:24.952459  312675 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0122 21:31:24.954056  312675 out.go:201] 
	
	
	==> CRI-O <==
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.254025639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582388253993615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c41b5fa-3131-4d26-8c8f-053175b3ef4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.254860242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf041341-18b7-49fb-93d7-7bd0c4d006b8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.255038573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf041341-18b7-49fb-93d7-7bd0c4d006b8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.255093496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cf041341-18b7-49fb-93d7-7bd0c4d006b8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.290788366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cff657e-0cd9-451d-a41c-e3eb5e97068a name=/runtime.v1.RuntimeService/Version
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.290896057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cff657e-0cd9-451d-a41c-e3eb5e97068a name=/runtime.v1.RuntimeService/Version
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.292338905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eeb71ceb-3736-4e12-912b-6b8417db6a61 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.292754447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582388292723230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eeb71ceb-3736-4e12-912b-6b8417db6a61 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.293481671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d216e9e2-5326-4cc9-a8b1-016ef47307a7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.293542858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d216e9e2-5326-4cc9-a8b1-016ef47307a7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.293591172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d216e9e2-5326-4cc9-a8b1-016ef47307a7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.332330837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bda2de9-4706-4e6a-8675-f0d64b6d6a77 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.332411432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bda2de9-4706-4e6a-8675-f0d64b6d6a77 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.334075640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09b57ae8-0252-4a31-b160-183d5168fca0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.334554037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582388334528867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09b57ae8-0252-4a31-b160-183d5168fca0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.335263712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad313b8e-08a3-4d3e-8ce9-59d80be24254 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.335318742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad313b8e-08a3-4d3e-8ce9-59d80be24254 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.335357544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad313b8e-08a3-4d3e-8ce9-59d80be24254 name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.376233144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37ba4ffa-1a12-4f46-bba0-802c82e779d3 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.376321186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37ba4ffa-1a12-4f46-bba0-802c82e779d3 name=/runtime.v1.RuntimeService/Version
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.377884722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b38224b6-8041-4572-8842-b3a536091041 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.378404850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737582388378380565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b38224b6-8041-4572-8842-b3a536091041 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.379319894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5f978c0-39ce-4db1-a6a0-37e915a4e18c name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.379378660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5f978c0-39ce-4db1-a6a0-37e915a4e18c name=/runtime.v1.RuntimeService/ListContainers
	Jan 22 21:46:28 old-k8s-version-181389 crio[623]: time="2025-01-22 21:46:28.379421283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c5f978c0-39ce-4db1-a6a0-37e915a4e18c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan22 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044754] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan22 21:23] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.204474] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.706641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615943] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.071910] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069973] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.211991] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.154871] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.286617] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +7.265364] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.070492] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.978592] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[ +12.734713] kauditd_printk_skb: 46 callbacks suppressed
	[Jan22 21:27] systemd-fstab-generator[4928]: Ignoring "noauto" option for root device
	[Jan22 21:29] systemd-fstab-generator[5204]: Ignoring "noauto" option for root device
	[  +0.083243] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:46:28 up 23 min,  0 users,  load average: 0.07, 0.03, 0.03
	Linux old-k8s-version-181389 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004906f0)
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009bbef0, 0x4f0ac20, 0xc000a698b0, 0x1, 0xc0001020c0)
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000264700, 0xc0001020c0)
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bcb730, 0xc00094ad80)
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 22 21:46:23 old-k8s-version-181389 kubelet[7031]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 22 21:46:23 old-k8s-version-181389 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 22 21:46:23 old-k8s-version-181389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 22 21:46:24 old-k8s-version-181389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 175.
	Jan 22 21:46:24 old-k8s-version-181389 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 22 21:46:24 old-k8s-version-181389 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 22 21:46:24 old-k8s-version-181389 kubelet[7040]: I0122 21:46:24.238839    7040 server.go:416] Version: v1.20.0
	Jan 22 21:46:24 old-k8s-version-181389 kubelet[7040]: I0122 21:46:24.239291    7040 server.go:837] Client rotation is on, will bootstrap in background
	Jan 22 21:46:24 old-k8s-version-181389 kubelet[7040]: I0122 21:46:24.241322    7040 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 22 21:46:24 old-k8s-version-181389 kubelet[7040]: W0122 21:46:24.242304    7040 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 22 21:46:24 old-k8s-version-181389 kubelet[7040]: I0122 21:46:24.242586    7040 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
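The failure captured above is the standard kubeadm wait-control-plane timeout: the kubelet never answers 'http://localhost:10248/healthz', so no control-plane containers are ever created (the container status section above is empty). A minimal triage sequence on the node, assembled from the commands the log itself suggests (the final cgroupDriver grep is an illustrative addition and assumes the stock kubelet config path), might look like:

	# inside the VM, e.g. via: out/minikube-linux-amd64 ssh -p old-k8s-version-181389
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# illustrative, not from the log: compare the kubelet cgroup driver with CRI-O's
	grep -i cgroupDriver /var/lib/kubelet/config.yaml

Since crictl reports no kube containers at all, the problem sits in the kubelet itself (it is crash-looping, per the kubelet section above) rather than in a crashing apiserver or etcd container.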
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 2 (254.460229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-181389" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (360.49s)
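The exit reason in the logs above (K8S_KUBELET_NOT_RUNNING) comes with minikube's own suggestion to pass an explicit kubelet cgroup driver, and the kubelet log shows 'Cannot detect current cgroup on cgroup v2'. A sketch of that retry for this profile, shown only to illustrate the suggested flag (the test's other start flags are omitted here, and this is not a verified fix):

	out/minikube-linux-amd64 start -p old-k8s-version-181389 \
	  --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-linux-amd64 status -p old-k8s-version-181389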

                                                
                                    

Test pass (268/318)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.41
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.17
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.32.1/json-events 5.66
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.16
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.68
22 TestOffline 87.18
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 202.56
31 TestAddons/serial/GCPAuth/Namespaces 2.5
32 TestAddons/serial/GCPAuth/FakeCredentials 9.6
35 TestAddons/parallel/Registry 16.53
37 TestAddons/parallel/InspektorGadget 12.48
38 TestAddons/parallel/MetricsServer 6.33
40 TestAddons/parallel/CSI 45.84
41 TestAddons/parallel/Headlamp 20.85
42 TestAddons/parallel/CloudSpanner 5.72
43 TestAddons/parallel/LocalPath 57.78
44 TestAddons/parallel/NvidiaDevicePlugin 7.38
45 TestAddons/parallel/Yakd 11.45
47 TestAddons/StoppedEnableDisable 91.38
48 TestCertOptions 64.8
49 TestCertExpiration 276.13
51 TestForceSystemdFlag 64.46
52 TestForceSystemdEnv 99.76
54 TestKVMDriverInstallOrUpdate 5.72
58 TestErrorSpam/setup 45.06
59 TestErrorSpam/start 0.42
60 TestErrorSpam/status 0.85
61 TestErrorSpam/pause 1.81
62 TestErrorSpam/unpause 2.11
63 TestErrorSpam/stop 5.9
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 59.64
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.93
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 2.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 36.13
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.79
86 TestFunctional/serial/LogsFileCmd 1.67
87 TestFunctional/serial/InvalidService 4.31
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 21.04
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.07
97 TestFunctional/parallel/ServiceCmdConnect 11.65
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 51.19
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.48
103 TestFunctional/parallel/MySQL 35.51
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.55
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
113 TestFunctional/parallel/License 0.27
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
118 TestFunctional/parallel/ImageCommands/ImageBuild 6.33
119 TestFunctional/parallel/ImageCommands/Setup 1.6
129 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.11
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.57
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.01
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
138 TestFunctional/parallel/ServiceCmd/List 0.39
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
140 TestFunctional/parallel/ProfileCmd/profile_list 0.5
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
143 TestFunctional/parallel/ServiceCmd/Format 0.45
144 TestFunctional/parallel/MountCmd/any-port 7.88
145 TestFunctional/parallel/ServiceCmd/URL 0.52
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
149 TestFunctional/parallel/Version/short 0.06
150 TestFunctional/parallel/Version/components 0.93
151 TestFunctional/parallel/MountCmd/specific-port 2.12
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 213.57
160 TestMultiControlPlane/serial/DeployApp 8.27
161 TestMultiControlPlane/serial/PingHostFromPods 1.47
162 TestMultiControlPlane/serial/AddWorkerNode 56.38
163 TestMultiControlPlane/serial/NodeLabels 0.08
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
165 TestMultiControlPlane/serial/CopyFile 14.75
166 TestMultiControlPlane/serial/StopSecondaryNode 91.8
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
168 TestMultiControlPlane/serial/RestartSecondaryNode 53.98
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 442.16
171 TestMultiControlPlane/serial/DeleteSecondaryNode 19.04
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
173 TestMultiControlPlane/serial/StopCluster 273.09
174 TestMultiControlPlane/serial/RestartCluster 122.12
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
176 TestMultiControlPlane/serial/AddSecondaryNode 81.75
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
181 TestJSONOutput/start/Command 90.89
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.84
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.74
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.46
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
209 TestMainNoArgs 0.06
210 TestMinikubeProfile 97.09
213 TestMountStart/serial/StartWithMountFirst 27.22
214 TestMountStart/serial/VerifyMountFirst 0.43
215 TestMountStart/serial/StartWithMountSecond 28.06
216 TestMountStart/serial/VerifyMountSecond 0.42
217 TestMountStart/serial/DeleteFirst 0.96
218 TestMountStart/serial/VerifyMountPostDelete 0.44
219 TestMountStart/serial/Stop 1.39
220 TestMountStart/serial/RestartStopped 27.15
221 TestMountStart/serial/VerifyMountPostStop 0.42
224 TestMultiNode/serial/FreshStart2Nodes 116.23
225 TestMultiNode/serial/DeployApp2Nodes 5.77
226 TestMultiNode/serial/PingHostFrom2Pods 0.91
227 TestMultiNode/serial/AddNode 53.74
228 TestMultiNode/serial/MultiNodeLabels 0.07
229 TestMultiNode/serial/ProfileList 0.66
230 TestMultiNode/serial/CopyFile 8.12
231 TestMultiNode/serial/StopNode 3.28
232 TestMultiNode/serial/StartAfterStop 42.48
233 TestMultiNode/serial/RestartKeepsNodes 348.78
234 TestMultiNode/serial/DeleteNode 2.94
235 TestMultiNode/serial/StopMultiNode 181.97
236 TestMultiNode/serial/RestartMultiNode 180.63
237 TestMultiNode/serial/ValidateNameConflict 46.65
244 TestScheduledStopUnix 117.09
248 TestRunningBinaryUpgrade 157.83
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/StartWithK8s 126.1
262 TestNetworkPlugins/group/false 4.25
266 TestNoKubernetes/serial/StartWithStopK8s 40.38
267 TestNoKubernetes/serial/Start 50.55
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
269 TestNoKubernetes/serial/ProfileList 28.19
270 TestNoKubernetes/serial/Stop 1.75
271 TestNoKubernetes/serial/StartNoArgs 29.89
272 TestStoppedBinaryUpgrade/Setup 0.46
273 TestStoppedBinaryUpgrade/Upgrade 122.58
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
283 TestPause/serial/Start 112.5
284 TestNetworkPlugins/group/auto/Start 75.46
285 TestNetworkPlugins/group/auto/KubeletFlags 0.24
286 TestNetworkPlugins/group/auto/NetCatPod 13.31
287 TestPause/serial/SecondStartNoReconfiguration 42.2
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
289 TestNetworkPlugins/group/kindnet/Start 97.13
290 TestNetworkPlugins/group/auto/DNS 0.17
291 TestNetworkPlugins/group/auto/Localhost 0.16
292 TestNetworkPlugins/group/auto/HairPin 0.14
293 TestNetworkPlugins/group/calico/Start 90.47
294 TestPause/serial/Pause 1.02
295 TestPause/serial/VerifyStatus 0.3
296 TestPause/serial/Unpause 0.94
297 TestPause/serial/PauseAgain 1.2
298 TestPause/serial/DeletePaused 1.19
299 TestPause/serial/VerifyDeletedResources 0.57
300 TestNetworkPlugins/group/custom-flannel/Start 95.3
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/DNS 0.17
306 TestNetworkPlugins/group/kindnet/Localhost 0.15
307 TestNetworkPlugins/group/kindnet/HairPin 0.15
308 TestNetworkPlugins/group/calico/KubeletFlags 0.24
309 TestNetworkPlugins/group/calico/NetCatPod 11.28
310 TestNetworkPlugins/group/calico/DNS 0.21
311 TestNetworkPlugins/group/calico/Localhost 0.24
312 TestNetworkPlugins/group/calico/HairPin 0.2
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
315 TestNetworkPlugins/group/enable-default-cni/Start 99.57
316 TestNetworkPlugins/group/custom-flannel/DNS 0.21
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
319 TestNetworkPlugins/group/flannel/Start 99.91
320 TestNetworkPlugins/group/bridge/Start 104.75
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.38
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
330 TestNetworkPlugins/group/flannel/NetCatPod 11.31
332 TestStartStop/group/no-preload/serial/FirstStart 78.41
333 TestNetworkPlugins/group/flannel/DNS 0.32
334 TestNetworkPlugins/group/flannel/Localhost 0.18
335 TestNetworkPlugins/group/flannel/HairPin 0.22
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
337 TestNetworkPlugins/group/bridge/NetCatPod 11.27
338 TestNetworkPlugins/group/bridge/DNS 10.17
340 TestStartStop/group/embed-certs/serial/FirstStart 98.12
341 TestNetworkPlugins/group/bridge/Localhost 0.14
342 TestNetworkPlugins/group/bridge/HairPin 0.16
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 109.09
345 TestStartStop/group/no-preload/serial/DeployApp 11.35
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
347 TestStartStop/group/no-preload/serial/Stop 91.08
348 TestStartStop/group/embed-certs/serial/DeployApp 9.31
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
350 TestStartStop/group/embed-certs/serial/Stop 91.19
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.07
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/embed-certs/serial/SecondStart 304.44
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
362 TestStartStop/group/old-k8s-version/serial/Stop 1.53
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
368 TestStartStop/group/embed-certs/serial/Pause 3.25
370 TestStartStop/group/newest-cni/serial/FirstStart 52.78
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.94
373 TestStartStop/group/newest-cni/serial/Stop 7.4
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 39.42
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
379 TestStartStop/group/newest-cni/serial/Pause 2.94
TestDownloadOnly/v1.20.0/json-events (12.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-562691 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-562691 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.407211089s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.41s)
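For reference, a minimal sketch of reproducing this download-only step by hand. The flags are copied from the invocation above (with the duplicated --container-runtime flag dropped); the profile name "download-only-demo" is a placeholder, not from this run.

# No VM is booted in --download-only mode; the command only fills the cache
# (ISO, preloaded image tarball, kubectl) under the .minikube cache directory,
# here /home/jenkins/minikube-integration/20288-247142/.minikube/cache.
out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
  --force --alsologtostderr --kubernetes-version=v1.20.0 \
  --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 delete -p download-only-demo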

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0122 20:02:17.060355  254754 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0122 20:02:17.060509  254754 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
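A quick way to make the same existence check manually, assuming the MINIKUBE_HOME layout used in this run (the path is taken from the log line above):

PRELOAD=preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
CACHE=/home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball
test -f "$CACHE/$PRELOAD" && echo "preload present" || echo "preload missing"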

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-562691
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-562691: exit status 85 (74.824258ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-562691 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC |          |
	|         | -p download-only-562691        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 20:02:04
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 20:02:04.704162  254766 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:02:04.704334  254766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:02:04.704345  254766 out.go:358] Setting ErrFile to fd 2...
	I0122 20:02:04.704349  254766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:02:04.704566  254766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	W0122 20:02:04.704712  254766 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20288-247142/.minikube/config/config.json: open /home/jenkins/minikube-integration/20288-247142/.minikube/config/config.json: no such file or directory
	I0122 20:02:04.705359  254766 out.go:352] Setting JSON to true
	I0122 20:02:04.707262  254766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9871,"bootTime":1737566254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:02:04.707414  254766 start.go:139] virtualization: kvm guest
	I0122 20:02:04.710368  254766 out.go:97] [download-only-562691] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:02:04.710581  254766 notify.go:220] Checking for updates...
	W0122 20:02:04.710595  254766 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball: no such file or directory
	I0122 20:02:04.712414  254766 out.go:169] MINIKUBE_LOCATION=20288
	I0122 20:02:04.714503  254766 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:02:04.716135  254766 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 20:02:04.717773  254766 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:02:04.719496  254766 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0122 20:02:04.722257  254766 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0122 20:02:04.722602  254766 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:02:04.764067  254766 out.go:97] Using the kvm2 driver based on user configuration
	I0122 20:02:04.764114  254766 start.go:297] selected driver: kvm2
	I0122 20:02:04.764125  254766 start.go:901] validating driver "kvm2" against <nil>
	I0122 20:02:04.764557  254766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:02:04.764664  254766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 20:02:04.782439  254766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 20:02:04.782525  254766 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 20:02:04.783126  254766 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0122 20:02:04.784146  254766 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 20:02:04.784203  254766 cni.go:84] Creating CNI manager for ""
	I0122 20:02:04.784274  254766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 20:02:04.784285  254766 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 20:02:04.784359  254766 start.go:340] cluster config:
	{Name:download-only-562691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-562691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:02:04.784578  254766 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:02:04.786707  254766 out.go:97] Downloading VM boot image ...
	I0122 20:02:04.786776  254766 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 20:02:11.370343  254766 out.go:97] Starting "download-only-562691" primary control-plane node in "download-only-562691" cluster
	I0122 20:02:11.370411  254766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 20:02:11.398708  254766 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0122 20:02:11.398752  254766 cache.go:56] Caching tarball of preloaded images
	I0122 20:02:11.399775  254766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0122 20:02:11.401509  254766 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0122 20:02:11.401539  254766 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0122 20:02:11.436559  254766 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-562691 host does not exist
	  To start a cluster, run: "minikube start -p download-only-562691"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
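The audit log above also records the exact preload URL and the md5 checksum minikube appends to the download request; as a sketch, the same artifact can be fetched and verified out-of-band with those values:

URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
curl -fLO "$URL"
echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -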

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-562691
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (5.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-489470 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-489470 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.659277266s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.66s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0122 20:02:23.127288  254754 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0122 20:02:23.127391  254754 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-489470
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-489470: exit status 85 (74.0155ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-562691 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC |                     |
	|         | -p download-only-562691        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:02 UTC |
	| delete  | -p download-only-562691        | download-only-562691 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC | 22 Jan 25 20:02 UTC |
	| start   | -o=json --download-only        | download-only-489470 | jenkins | v1.35.0 | 22 Jan 25 20:02 UTC |                     |
	|         | -p download-only-489470        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 20:02:17
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 20:02:17.518641  254982 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:02:17.518783  254982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:02:17.518795  254982 out.go:358] Setting ErrFile to fd 2...
	I0122 20:02:17.518800  254982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:02:17.519019  254982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:02:17.519725  254982 out.go:352] Setting JSON to true
	I0122 20:02:17.520737  254982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9884,"bootTime":1737566254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:02:17.520827  254982 start.go:139] virtualization: kvm guest
	I0122 20:02:17.523277  254982 out.go:97] [download-only-489470] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:02:17.523516  254982 notify.go:220] Checking for updates...
	I0122 20:02:17.525079  254982 out.go:169] MINIKUBE_LOCATION=20288
	I0122 20:02:17.526976  254982 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:02:17.528739  254982 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 20:02:17.530546  254982 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:02:17.532319  254982 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0122 20:02:17.535092  254982 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0122 20:02:17.535411  254982 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:02:17.574804  254982 out.go:97] Using the kvm2 driver based on user configuration
	I0122 20:02:17.574866  254982 start.go:297] selected driver: kvm2
	I0122 20:02:17.574876  254982 start.go:901] validating driver "kvm2" against <nil>
	I0122 20:02:17.575288  254982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:02:17.575410  254982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-247142/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 20:02:17.593745  254982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 20:02:17.593826  254982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 20:02:17.594426  254982 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0122 20:02:17.594601  254982 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 20:02:17.594639  254982 cni.go:84] Creating CNI manager for ""
	I0122 20:02:17.594700  254982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0122 20:02:17.594710  254982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 20:02:17.594779  254982 start.go:340] cluster config:
	{Name:download-only-489470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-489470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:02:17.594903  254982 iso.go:125] acquiring lock: {Name:mk30bd26a0b89dc7e1dff013948e67816ce26cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:02:17.596874  254982 out.go:97] Starting "download-only-489470" primary control-plane node in "download-only-489470" cluster
	I0122 20:02:17.596915  254982 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 20:02:17.627821  254982 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 20:02:17.627882  254982 cache.go:56] Caching tarball of preloaded images
	I0122 20:02:17.628066  254982 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 20:02:17.629952  254982 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0122 20:02:17.629997  254982 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0122 20:02:17.658750  254982 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0122 20:02:21.571945  254982 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0122 20:02:21.572060  254982 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20288-247142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0122 20:02:22.364662  254982 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0122 20:02:22.365118  254982 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/download-only-489470/config.json ...
	I0122 20:02:22.365174  254982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/download-only-489470/config.json: {Name:mk7160c7ba718705dd6ae9aa8276766aeae67952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:02:22.366159  254982 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0122 20:02:22.367068  254982 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20288-247142/.minikube/cache/linux/amd64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-489470 host does not exist
	  To start a cluster, run: "minikube start -p download-only-489470"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-489470
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I0122 20:02:23.838538  254754 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-292789 --alsologtostderr --binary-mirror http://127.0.0.1:43139 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-292789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-292789
--- PASS: TestBinaryMirror (0.68s)
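The mirror URL above is served by the test harness itself. As a rough sketch only, a local mirror can be stood up by hand; the directory layout is an assumption here (it must mimic the dl.k8s.io release paths minikube requests, e.g. release/v1.32.1/bin/linux/amd64/kubectl, as seen in the binary.go log line above), and the copied kubectl comes from this run's cache:

mkdir -p mirror/release/v1.32.1/bin/linux/amd64
cp /home/jenkins/minikube-integration/20288-247142/.minikube/cache/linux/amd64/v1.32.1/kubectl \
   mirror/release/v1.32.1/bin/linux/amd64/kubectl
(cd mirror && python3 -m http.server 43139) &
out/minikube-linux-amd64 start --download-only -p binary-mirror-292789 --alsologtostderr \
  --binary-mirror http://127.0.0.1:43139 --driver=kvm2 --container-runtime=crio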

                                                
                                    
TestOffline (87.18s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-341845 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-341845 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m25.931154671s)
helpers_test.go:175: Cleaning up "offline-crio-341845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-341845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-341845: (1.247872703s)
--- PASS: TestOffline (87.18s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-772234
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-772234: exit status 85 (71.619028ms)

                                                
                                                
-- stdout --
	* Profile "addons-772234" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-772234"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-772234
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-772234: exit status 85 (72.301504ms)

                                                
                                                
-- stdout --
	* Profile "addons-772234" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-772234"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (202.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-772234 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-772234 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m22.555435027s)
--- PASS: TestAddons/Setup (202.56s)
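The same add-ons can also be toggled individually on the running profile instead of being passed as --addons flags at start; a small sketch using the same enable/disable subcommands the parallel tests below rely on (metrics-server chosen arbitrarily as the example):

out/minikube-linux-amd64 -p addons-772234 addons list
out/minikube-linux-amd64 addons enable metrics-server -p addons-772234
out/minikube-linux-amd64 -p addons-772234 addons disable metrics-server --alsologtostderr -v=1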

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-772234 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-772234 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-772234 get secret gcp-auth -n new-namespace: exit status 1 (86.137466ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-772234 logs -l app=gcp-auth -n gcp-auth
I0122 20:05:47.760266  254754 retry.go:31] will retry after 2.204785954s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/01/22 20:05:46 GCP Auth Webhook started!
	2025/01/22 20:05:47 Ready to marshal response ...
	2025/01/22 20:05:47 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-772234 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.50s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.6s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-772234 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-772234 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d2a71b7-8b58-4680-a61c-accbf7d6a820] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8d2a71b7-8b58-4680-a61c-accbf7d6a820] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00529309s
addons_test.go:633: (dbg) Run:  kubectl --context addons-772234 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-772234 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-772234 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.60s)
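A hedged sketch of inspecting the same injection from a shell: the gcp-auth webhook sets GOOGLE_APPLICATION_CREDENTIALS inside the busybox pod created above, so the mounted file can be read through the variable rather than by assuming its path:

kubectl --context addons-772234 exec busybox -- /bin/sh -c \
  'echo "$GOOGLE_APPLICATION_CREDENTIALS"; cat "$GOOGLE_APPLICATION_CREDENTIALS"'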

                                                
                                    
TestAddons/parallel/Registry (16.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.84491ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-zjk8j" [915fd237-ebbe-434c-adc5-f3abec60767f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005366358s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zwvcf" [8a6211f0-8029-4ac7-9a77-513808839094] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004498843s
addons_test.go:331: (dbg) Run:  kubectl --context addons-772234 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-772234 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-772234 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.413250199s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 ip
2025/01/22 20:06:23 [DEBUG] GET http://192.168.39.58:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.53s)
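For a manual check against the same registry add-on, a sketch assuming it is still enabled: the node IP comes from `minikube ip` as in the test, and /v2/_catalog is the standard Docker Registry HTTP API listing endpoint, which this test does not itself call:

REGISTRY_IP=$(out/minikube-linux-amd64 -p addons-772234 ip)
curl -s "http://${REGISTRY_IP}:5000/v2/_catalog"
kubectl --context addons-772234 run --rm -it registry-check --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"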

                                                
                                    
TestAddons/parallel/InspektorGadget (12.48s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ztqh5" [56559743-1efb-46c5-b804-2b6f28221ed8] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005259033s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable inspektor-gadget --alsologtostderr -v=1: (6.473169903s)
--- PASS: TestAddons/parallel/InspektorGadget (12.48s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.50947ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-qnc8h" [e2d700be-750b-4d4d-a086-8d4000faa1e3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005357764s
addons_test.go:402: (dbg) Run:  kubectl --context addons-772234 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable metrics-server --alsologtostderr -v=1: (1.249357496s)
--- PASS: TestAddons/parallel/MetricsServer (6.33s)
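Once metrics-server is reporting, usage can also be read per node and across namespaces; plain kubectl, nothing specific to this run:

kubectl --context addons-772234 top nodes
kubectl --context addons-772234 top pods -A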

                                                
                                    
TestAddons/parallel/CSI (45.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0122 20:06:24.786826  254754 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0122 20:06:24.792942  254754 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0122 20:06:24.792994  254754 kapi.go:107] duration metric: took 6.194207ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.21147ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-772234 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-772234 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [413de554-47e5-4cf2-b4d4-0598cda0c8c5] Pending
helpers_test.go:344: "task-pv-pod" [413de554-47e5-4cf2-b4d4-0598cda0c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [413de554-47e5-4cf2-b4d4-0598cda0c8c5] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004699482s
addons_test.go:511: (dbg) Run:  kubectl --context addons-772234 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-772234 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-772234 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-772234 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-772234 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-772234 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-772234 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5c3cc19d-11fd-4d38-a853-0bb58100a9d8] Pending
helpers_test.go:344: "task-pv-pod-restore" [5c3cc19d-11fd-4d38-a853-0bb58100a9d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5c3cc19d-11fd-4d38-a853-0bb58100a9d8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00492653s
addons_test.go:553: (dbg) Run:  kubectl --context addons-772234 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-772234 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-772234 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable volumesnapshots --alsologtostderr -v=1: (1.226487716s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.030155734s)
--- PASS: TestAddons/parallel/CSI (45.84s)

                                                
                                    
TestAddons/parallel/Headlamp (20.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-772234 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-772234 --alsologtostderr -v=1: (1.040804851s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-rwz4f" [73aca1be-0dc7-444a-af74-6509dbc57596] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-rwz4f" [73aca1be-0dc7-444a-af74-6509dbc57596] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005666597s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable headlamp --alsologtostderr -v=1: (6.799768796s)
--- PASS: TestAddons/parallel/Headlamp (20.85s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-tns6k" [ef11a8eb-e6d3-454f-82e1-85b9cccbb5ad] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00676456s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                    
TestAddons/parallel/LocalPath (57.78s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-772234 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-772234 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [034377b2-e09c-4226-9489-4c0c589fc9b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [034377b2-e09c-4226-9489-4c0c589fc9b2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [034377b2-e09c-4226-9489-4c0c589fc9b2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005953475s
addons_test.go:906: (dbg) Run:  kubectl --context addons-772234 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 ssh "cat /opt/local-path-provisioner/pvc-b5af557f-06dc-4193-b387-b33d4ee260a6_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-772234 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-772234 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.587591s)
--- PASS: TestAddons/parallel/LocalPath (57.78s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-28lq2" [4d14b2d9-bcc1-4a92-9453-8af3817ffa52] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.017168389s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.359621214s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.38s)

                                                
                                    
TestAddons/parallel/Yakd (11.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-fj78p" [1e3e4bba-f016-4369-8f05-e6435f765341] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.091813336s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-772234 addons disable yakd --alsologtostderr -v=1: (6.359386913s)
--- PASS: TestAddons/parallel/Yakd (11.45s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-772234
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-772234: (1m31.032816197s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-772234
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-772234
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-772234
--- PASS: TestAddons/StoppedEnableDisable (91.38s)

                                                
                                    
TestCertOptions (64.8s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-837962 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-837962 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m3.248797018s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-837962 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-837962 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-837962 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-837962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-837962
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-837962: (1.060558661s)
--- PASS: TestCertOptions (64.80s)

                                                
                                    
TestCertExpiration (276.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673511 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673511 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (48.184012892s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673511 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673511 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (46.782217063s)
helpers_test.go:175: Cleaning up "cert-expiration-673511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-673511
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-673511: (1.165360428s)
--- PASS: TestCertExpiration (276.13s)

                                                
                                    
TestForceSystemdFlag (64.46s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-765715 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-765715 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m3.373282848s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-765715 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-765715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-765715
E0122 21:08:47.448634  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestForceSystemdFlag (64.46s)

                                                
                                    
TestForceSystemdEnv (99.76s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-374267 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-374267 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.657072124s)
helpers_test.go:175: Cleaning up "force-systemd-env-374267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-374267
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-374267: (1.098315302s)
--- PASS: TestForceSystemdEnv (99.76s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.72s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0122 21:11:25.403271  254754 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0122 21:11:25.403451  254754 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0122 21:11:25.443779  254754 install.go:62] docker-machine-driver-kvm2: exit status 1
W0122 21:11:25.444286  254754 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0122 21:11:25.444361  254754 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2139954403/001/docker-machine-driver-kvm2
I0122 21:11:25.925298  254754 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2139954403/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00071e3b0 gz:0xc00071e3b8 tar:0xc00071e350 tar.bz2:0xc00071e370 tar.gz:0xc00071e380 tar.xz:0xc00071e390 tar.zst:0xc00071e3a0 tbz2:0xc00071e370 tgz:0xc00071e380 txz:0xc00071e390 tzst:0xc00071e3a0 xz:0xc00071e3c0 zip:0xc00071e3d0 zst:0xc00071e3c8] Getters:map[file:0xc00097e7a0 http:0xc0000759a0 https:0xc0000759f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0122 21:11:25.925354  254754 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2139954403/001/docker-machine-driver-kvm2
I0122 21:11:28.427706  254754 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0122 21:11:28.427830  254754 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0122 21:11:28.464553  254754 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0122 21:11:28.464597  254754 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0122 21:11:28.464681  254754 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0122 21:11:28.464723  254754 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2139954403/002/docker-machine-driver-kvm2
I0122 21:11:28.818799  254754 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2139954403/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00071e3b0 gz:0xc00071e3b8 tar:0xc00071e350 tar.bz2:0xc00071e370 tar.gz:0xc00071e380 tar.xz:0xc00071e390 tar.zst:0xc00071e3a0 tbz2:0xc00071e370 tgz:0xc00071e380 txz:0xc00071e390 tzst:0xc00071e3a0 xz:0xc00071e3c0 zip:0xc00071e3d0 zst:0xc00071e3c8] Getters:map[file:0xc001f26e30 http:0xc000521b30 https:0xc000521bd0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0122 21:11:28.818903  254754 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2139954403/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.72s)

                                                
                                    
TestErrorSpam/setup (45.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-643511 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-643511 --driver=kvm2  --container-runtime=crio
E0122 20:10:50.266817  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.273378  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.284883  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.306487  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.348059  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.429711  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.591423  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:50.913238  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:51.555494  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:52.837153  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:10:55.399154  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:11:00.521173  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:11:10.763335  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-643511 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-643511 --driver=kvm2  --container-runtime=crio: (45.062868429s)
--- PASS: TestErrorSpam/setup (45.06s)

                                                
                                    
TestErrorSpam/start (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 pause
--- PASS: TestErrorSpam/pause (1.81s)

                                                
                                    
TestErrorSpam/unpause (2.11s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 unpause
--- PASS: TestErrorSpam/unpause (2.11s)

                                                
                                    
TestErrorSpam/stop (5.9s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 stop: (2.488162637s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 stop: (1.397175308s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 stop
E0122 20:11:31.244907  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-643511 --log_dir /tmp/nospam-643511 stop: (2.013299031s)
--- PASS: TestErrorSpam/stop (5.90s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20288-247142/.minikube/files/etc/test/nested/copy/254754/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (59.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136272 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0122 20:12:12.206576  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-136272 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.641085318s)
--- PASS: TestFunctional/serial/StartWithProxy (59.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.93s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0122 20:12:32.380304  254754 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136272 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-136272 --alsologtostderr -v=8: (38.926956277s)
functional_test.go:663: soft start took 38.927683643s for "functional-136272" cluster.
I0122 20:13:11.307678  254754 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (38.93s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-136272 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 cache add registry.k8s.io/pause:3.1: (1.204310467s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 cache add registry.k8s.io/pause:3.3: (1.166047939s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 cache add registry.k8s.io/pause:latest: (1.191273206s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-136272 /tmp/TestFunctionalserialCacheCmdcacheadd_local3273132666/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cache add minikube-local-cache-test:functional-136272
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 cache add minikube-local-cache-test:functional-136272: (1.679282964s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cache delete minikube-local-cache-test:functional-136272
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-136272
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (240.454552ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 cache reload: (1.161419624s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 kubectl -- --context functional-136272 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-136272 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136272 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0122 20:13:34.131144  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-136272 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.132569505s)
functional_test.go:761: restart took 36.13271583s for "functional-136272" cluster.
I0122 20:13:55.918500  254754 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (36.13s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-136272 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 logs: (1.789994322s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 logs --file /tmp/TestFunctionalserialLogsFileCmd4138022097/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 logs --file /tmp/TestFunctionalserialLogsFileCmd4138022097/001/logs.txt: (1.672618207s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.67s)

                                                
                                    
TestFunctional/serial/InvalidService (4.31s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-136272 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-136272
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-136272: exit status 115 (592.753934ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.117:31689 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-136272 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 config get cpus: exit status 14 (65.12031ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 config get cpus: exit status 14 (75.46464ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136272 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136272 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 263057: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.04s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136272 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136272 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (201.939661ms)

                                                
                                                
-- stdout --
	* [functional-136272] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 20:14:18.989927  262620 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:14:18.990070  262620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:14:18.990078  262620 out.go:358] Setting ErrFile to fd 2...
	I0122 20:14:18.990085  262620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:14:18.990407  262620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:14:18.991132  262620 out.go:352] Setting JSON to false
	I0122 20:14:18.992606  262620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10605,"bootTime":1737566254,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:14:18.992769  262620 start.go:139] virtualization: kvm guest
	I0122 20:14:18.994636  262620 out.go:177] * [functional-136272] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:14:18.996604  262620 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:14:18.996654  262620 notify.go:220] Checking for updates...
	I0122 20:14:18.999499  262620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:14:19.001931  262620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 20:14:19.003205  262620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:14:19.004780  262620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:14:19.006649  262620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:14:19.009255  262620 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:14:19.009917  262620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:14:19.010009  262620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:14:19.037936  262620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40843
	I0122 20:14:19.039255  262620 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:14:19.040923  262620 main.go:141] libmachine: Using API Version  1
	I0122 20:14:19.040949  262620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:14:19.041494  262620 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:14:19.041725  262620 main.go:141] libmachine: (functional-136272) Calling .DriverName
	I0122 20:14:19.042096  262620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:14:19.042654  262620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:14:19.042735  262620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:14:19.064302  262620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I0122 20:14:19.064992  262620 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:14:19.065664  262620 main.go:141] libmachine: Using API Version  1
	I0122 20:14:19.065693  262620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:14:19.066249  262620 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:14:19.066487  262620 main.go:141] libmachine: (functional-136272) Calling .DriverName
	I0122 20:14:19.119417  262620 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 20:14:19.121050  262620 start.go:297] selected driver: kvm2
	I0122 20:14:19.121076  262620 start.go:901] validating driver "kvm2" against &{Name:functional-136272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-136272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:14:19.121241  262620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:14:19.123834  262620 out.go:201] 
	W0122 20:14:19.125579  262620 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0122 20:14:19.126824  262620 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136272 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
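The stderr above shows minikube's pre-flight check rejecting the 250MiB request against the 1800MB floor before any VM work starts. Below is a minimal, self-contained Go sketch of such a memory-floor check, using only the values printed in the log; the names are hypothetical and this is not minikube's actual implementation.

	package main

	import "fmt"

	// minUsableMemoryMB mirrors the 1800MB floor reported in the log above.
	const minUsableMemoryMB = 1800

	// validateRequestedMemory rejects allocations below the usable minimum,
	// analogous to the RSRC_INSUFFICIENT_REQ_MEMORY exit in the stderr above.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			return
		}
		fmt.Println("memory request accepted")
	}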

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136272 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136272 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (183.063611ms)

                                                
                                                
-- stdout --
	* [functional-136272] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 20:14:18.819095  262561 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:14:18.819229  262561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:14:18.819242  262561 out.go:358] Setting ErrFile to fd 2...
	I0122 20:14:18.819249  262561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:14:18.819663  262561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:14:18.821020  262561 out.go:352] Setting JSON to false
	I0122 20:14:18.822219  262561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10605,"bootTime":1737566254,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:14:18.822359  262561 start.go:139] virtualization: kvm guest
	I0122 20:14:18.824704  262561 out.go:177] * [functional-136272] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0122 20:14:18.826707  262561 notify.go:220] Checking for updates...
	I0122 20:14:18.826724  262561 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:14:18.828815  262561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:14:18.830734  262561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 20:14:18.832428  262561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 20:14:18.833898  262561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:14:18.835574  262561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:14:18.837698  262561 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:14:18.838401  262561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:14:18.838519  262561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:14:18.856180  262561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33141
	I0122 20:14:18.856680  262561 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:14:18.857407  262561 main.go:141] libmachine: Using API Version  1
	I0122 20:14:18.857426  262561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:14:18.857981  262561 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:14:18.858157  262561 main.go:141] libmachine: (functional-136272) Calling .DriverName
	I0122 20:14:18.858499  262561 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:14:18.858855  262561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:14:18.858929  262561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:14:18.876120  262561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0122 20:14:18.876576  262561 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:14:18.877125  262561 main.go:141] libmachine: Using API Version  1
	I0122 20:14:18.877148  262561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:14:18.877506  262561 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:14:18.877768  262561 main.go:141] libmachine: (functional-136272) Calling .DriverName
	I0122 20:14:18.918613  262561 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0122 20:14:18.920194  262561 start.go:297] selected driver: kvm2
	I0122 20:14:18.920216  262561 start.go:901] validating driver "kvm2" against &{Name:functional-136272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-136272 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:14:18.920386  262561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:14:18.923081  262561 out.go:201] 
	W0122 20:14:18.924575  262561 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0122 20:14:18.925979  262561 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
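The second status invocation above passes a Go text/template format string via -f. As a reference for that syntax, here is a small runnable sketch that renders the same kind of template against a stand-in struct; the Status type and its field values are illustrative, not minikube's internal status type.

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the fields referenced by the -f format string above.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// Same template syntax as the -f argument in the log above.
		const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}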

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-136272 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-136272 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vtnjk" [6bed384c-f6a6-4117-a216-8dffa9b0a8eb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vtnjk" [6bed384c-f6a6-4117-a216-8dffa9b0a8eb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004833478s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.117:31315
functional_test.go:1675: http://192.168.39.117:31315: success! body:

Hostname: hello-node-connect-58f9cf68d8-vtnjk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.117:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.117:31315
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)
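The test resolves the NodePort URL with 'minikube service hello-node-connect --url' and then fetches it; the echoserver body above includes the serving pod's hostname. A minimal Go sketch of that final verification step, assuming the endpoint printed in the log is reachable from where the sketch runs:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		// NodePort endpoint as reported by the service --url command above.
		url := "http://192.168.39.117:31315"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// The echoserver reply includes the serving pod's hostname, as seen in the log.
		if strings.Contains(string(body), "Hostname:") {
			fmt.Println("endpoint reachable, got echoserver response")
		} else {
			fmt.Printf("unexpected body:\n%s\n", body)
		}
	}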

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [640c0699-73aa-436b-967b-35422f29119d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005590136s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-136272 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-136272 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-136272 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-136272 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [13ab58f7-44d6-470f-91f8-4fc31d3303b8] Pending
helpers_test.go:344: "sp-pod" [13ab58f7-44d6-470f-91f8-4fc31d3303b8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [13ab58f7-44d6-470f-91f8-4fc31d3303b8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004874634s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-136272 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-136272 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-136272 delete -f testdata/storage-provisioner/pod.yaml: (1.252120184s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-136272 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [86579d4f-a843-4056-a402-ae694fce40ea] Pending
helpers_test.go:344: "sp-pod" [86579d4f-a843-4056-a402-ae694fce40ea] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [86579d4f-a843-4056-a402-ae694fce40ea] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.004234904s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-136272 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.19s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh -n functional-136272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cp functional-136272:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3006393623/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh -n functional-136272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh -n functional-136272 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
TestFunctional/parallel/MySQL (35.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-136272 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-2z6c5" [9b259546-57db-4daf-9fba-c6e473c03856] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-2z6c5" [9b259546-57db-4daf-9fba-c6e473c03856] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.003971692s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-136272 exec mysql-58ccfd96bb-2z6c5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-136272 exec mysql-58ccfd96bb-2z6c5 -- mysql -ppassword -e "show databases;": exit status 1 (142.211815ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0122 20:14:54.284502  254754 retry.go:31] will retry after 1.202005441s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-136272 exec mysql-58ccfd96bb-2z6c5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-136272 exec mysql-58ccfd96bb-2z6c5 -- mysql -ppassword -e "show databases;": exit status 1 (170.391808ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0122 20:14:55.657526  254754 retry.go:31] will retry after 1.645426125s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-136272 exec mysql-58ccfd96bb-2z6c5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.51s)
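The "will retry after ..." lines show the client backing off while mysqld inside the pod finishes starting, then succeeding on the third attempt. A generic retry-with-growing-delay sketch in the same spirit (not minikube's retry package, just an illustration):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a little longer after each
	// failure, similar in spirit to the "will retry after ..." lines above.
	func retry(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between attempts
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, time.Second, func() error {
			calls++
			if calls < 3 {
				return errors.New("can't connect to local MySQL server (still starting)")
			}
			return nil
		})
		fmt.Println("result:", err)
	}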

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/254754/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /etc/test/nested/copy/254754/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/254754.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /etc/ssl/certs/254754.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/254754.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /usr/share/ca-certificates/254754.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2547542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /etc/ssl/certs/2547542.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2547542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /usr/share/ca-certificates/2547542.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.55s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-136272 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
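The --template flag above ranges over the first node's .metadata.labels map and prints each key. The same range-over-map template syntax can be tried standalone; the label map below is a made-up stand-in for a real node's labels:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Same range-over-map template idea as the kubectl --template above,
		// applied to a plain map instead of the Node object's .metadata.labels.
		const tmpl = `{{range $k, $v := .}}{{$k}} {{end}}`
		labels := map[string]string{
			"kubernetes.io/hostname": "functional-136272",
			"kubernetes.io/os":       "linux",
		}
		t := template.Must(template.New("labels").Parse(tmpl))
		if err := t.Execute(os.Stdout, labels); err != nil {
			panic(err)
		}
	}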

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh "sudo systemctl is-active docker": exit status 1 (261.584064ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh "sudo systemctl is-active containerd": exit status 1 (258.709354ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
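With cri-o as the active runtime, "systemctl is-active docker" and "systemctl is-active containerd" both print inactive and exit with status 3, which the test treats as the expected result. A small sketch of reading that state from Go via os/exec; it runs the command on the local host, whereas the test runs it inside the VM over ssh:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// isActive reports whether a systemd unit is active by running
	// "systemctl is-active <unit>"; a non-zero exit (e.g. status 3 for
	// inactive, as in the log above) means the unit is not active.
	func isActive(unit string) (bool, string, error) {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, state, nil // systemctl exited non-zero: not active
		}
		if err != nil {
			return false, state, err // systemctl itself could not be run
		}
		return true, state, nil
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			active, state, err := isActive(unit)
			if err != nil {
				fmt.Printf("%s: error: %v\n", unit, err)
				continue
			}
			fmt.Printf("%s: active=%v (%s)\n", unit, active, state)
		}
	}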

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136272 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-136272
localhost/kicbase/echo-server:functional-136272
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136272 image ls --format short --alsologtostderr:
I0122 20:14:29.637682  263593 out.go:345] Setting OutFile to fd 1 ...
I0122 20:14:29.637815  263593 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:29.637820  263593 out.go:358] Setting ErrFile to fd 2...
I0122 20:14:29.637824  263593 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:29.638125  263593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
I0122 20:14:29.639132  263593 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:29.639308  263593 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:29.639890  263593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:29.639985  263593 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:29.656896  263593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
I0122 20:14:29.657499  263593 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:29.658199  263593 main.go:141] libmachine: Using API Version  1
I0122 20:14:29.658228  263593 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:29.658629  263593 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:29.658863  263593 main.go:141] libmachine: (functional-136272) Calling .GetState
I0122 20:14:29.661015  263593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:29.661076  263593 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:29.678275  263593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
I0122 20:14:29.678747  263593 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:29.679379  263593 main.go:141] libmachine: Using API Version  1
I0122 20:14:29.679409  263593 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:29.679762  263593 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:29.679995  263593 main.go:141] libmachine: (functional-136272) Calling .DriverName
I0122 20:14:29.680213  263593 ssh_runner.go:195] Run: systemctl --version
I0122 20:14:29.680266  263593 main.go:141] libmachine: (functional-136272) Calling .GetSSHHostname
I0122 20:14:29.683785  263593 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:29.684393  263593 main.go:141] libmachine: (functional-136272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:75:7d", ip: ""} in network mk-functional-136272: {Iface:virbr1 ExpiryTime:2025-01-22 21:11:49 +0000 UTC Type:0 Mac:52:54:00:21:75:7d Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-136272 Clientid:01:52:54:00:21:75:7d}
I0122 20:14:29.684433  263593 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined IP address 192.168.39.117 and MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:29.684581  263593 main.go:141] libmachine: (functional-136272) Calling .GetSSHPort
I0122 20:14:29.684861  263593 main.go:141] libmachine: (functional-136272) Calling .GetSSHKeyPath
I0122 20:14:29.685023  263593 main.go:141] libmachine: (functional-136272) Calling .GetSSHUsername
I0122 20:14:29.685202  263593 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/functional-136272/id_rsa Username:docker}
I0122 20:14:29.816351  263593 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:14:29.912902  263593 main.go:141] libmachine: Making call to close driver server
I0122 20:14:29.912917  263593 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:29.913292  263593 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:29.913321  263593 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:29.913346  263593 main.go:141] libmachine: Making call to close driver server
I0122 20:14:29.913354  263593 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:29.913641  263593 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:29.913665  263593 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:29.913738  263593 main.go:141] libmachine: (functional-136272) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136272 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-136272  | 9818338530ac8 | 1.47MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-136272  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-136272  | c0e6ac0ff656d | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136272 image ls --format table --alsologtostderr:
I0122 20:14:37.021466  263802 out.go:345] Setting OutFile to fd 1 ...
I0122 20:14:37.021620  263802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:37.021632  263802 out.go:358] Setting ErrFile to fd 2...
I0122 20:14:37.021638  263802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:37.021859  263802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
I0122 20:14:37.022587  263802 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:37.022725  263802 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:37.023114  263802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:37.023193  263802 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:37.040351  263802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
I0122 20:14:37.040881  263802 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:37.041534  263802 main.go:141] libmachine: Using API Version  1
I0122 20:14:37.041567  263802 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:37.041922  263802 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:37.042220  263802 main.go:141] libmachine: (functional-136272) Calling .GetState
I0122 20:14:37.044542  263802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:37.044618  263802 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:37.061192  263802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
I0122 20:14:37.061706  263802 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:37.062347  263802 main.go:141] libmachine: Using API Version  1
I0122 20:14:37.062375  263802 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:37.062717  263802 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:37.062991  263802 main.go:141] libmachine: (functional-136272) Calling .DriverName
I0122 20:14:37.063211  263802 ssh_runner.go:195] Run: systemctl --version
I0122 20:14:37.063246  263802 main.go:141] libmachine: (functional-136272) Calling .GetSSHHostname
I0122 20:14:37.066878  263802 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:37.067483  263802 main.go:141] libmachine: (functional-136272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:75:7d", ip: ""} in network mk-functional-136272: {Iface:virbr1 ExpiryTime:2025-01-22 21:11:49 +0000 UTC Type:0 Mac:52:54:00:21:75:7d Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-136272 Clientid:01:52:54:00:21:75:7d}
I0122 20:14:37.067529  263802 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined IP address 192.168.39.117 and MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:37.067685  263802 main.go:141] libmachine: (functional-136272) Calling .GetSSHPort
I0122 20:14:37.067911  263802 main.go:141] libmachine: (functional-136272) Calling .GetSSHKeyPath
I0122 20:14:37.068085  263802 main.go:141] libmachine: (functional-136272) Calling .GetSSHUsername
I0122 20:14:37.068251  263802 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/functional-136272/id_rsa Username:docker}
I0122 20:14:37.223394  263802 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:14:37.360301  263802 main.go:141] libmachine: Making call to close driver server
I0122 20:14:37.360332  263802 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:37.360705  263802 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:37.360729  263802 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:37.360752  263802 main.go:141] libmachine: Making call to close driver server
I0122 20:14:37.360761  263802 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:37.361059  263802 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:37.361100  263802 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:37.361101  263802 main.go:141] libmachine: (functional-136272) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136272 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-136272"],"size":"4943877"},{"id":"9818338530ac851772b9650f486a19c892a34d137137497334f48379e7172441","repoDigests":["localhost/my-image@sha256:f2f6d314bc2d6e4d5c9f56b2df610383b34b1e4672c03c9418a5bc8a62e8491a"],"repoTags":["localhost/my-image:functional-136272"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f
33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab9
89956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"29900f11401a15292c371c439fcd4308a1609414867b452058dee6d9c4a42497","repoDigests":["docker.io/library/52ca9fdf1b13da0b24eb584c529671e6cb49442c3106371f0e39aec18cd45e0d-tmp@sha256:f20ad065775bc70ce1684c1d5f7a59a5f88ff828a2bbe629ea8b80dc407a5ce6"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d8
7b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c0e6ac0ff656db164a10e73fe112485ef0f9c91ff3761bfbdc6b2d988d0bb056","repoDigests":["localhost/minikube-local-cache-test@sha256:8a97788c17044fc802dcef8495919321d81b4239e2f7ebfc0702b3fbcbdcedb5"],"repoTags":["localhost/minikube-local-cache-test:functional-136272"],"size":"3330"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568c
a9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b
88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9bea9f2796e236cb18c2b3ad561ff
29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"15
1021823"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136272 image ls --format json --alsologtostderr:
I0122 20:14:36.642006  263751 out.go:345] Setting OutFile to fd 1 ...
I0122 20:14:36.642179  263751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:36.642221  263751 out.go:358] Setting ErrFile to fd 2...
I0122 20:14:36.642229  263751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:36.642448  263751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
I0122 20:14:36.643177  263751 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:36.643285  263751 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:36.643689  263751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:36.643766  263751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:36.660956  263751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
I0122 20:14:36.661576  263751 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:36.662308  263751 main.go:141] libmachine: Using API Version  1
I0122 20:14:36.662341  263751 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:36.662766  263751 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:36.663080  263751 main.go:141] libmachine: (functional-136272) Calling .GetState
I0122 20:14:36.665512  263751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:36.665586  263751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:36.682758  263751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
I0122 20:14:36.683297  263751 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:36.683907  263751 main.go:141] libmachine: Using API Version  1
I0122 20:14:36.683939  263751 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:36.684310  263751 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:36.684546  263751 main.go:141] libmachine: (functional-136272) Calling .DriverName
I0122 20:14:36.684793  263751 ssh_runner.go:195] Run: systemctl --version
I0122 20:14:36.684828  263751 main.go:141] libmachine: (functional-136272) Calling .GetSSHHostname
I0122 20:14:36.688459  263751 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:36.689025  263751 main.go:141] libmachine: (functional-136272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:75:7d", ip: ""} in network mk-functional-136272: {Iface:virbr1 ExpiryTime:2025-01-22 21:11:49 +0000 UTC Type:0 Mac:52:54:00:21:75:7d Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-136272 Clientid:01:52:54:00:21:75:7d}
I0122 20:14:36.689067  263751 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined IP address 192.168.39.117 and MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:36.689288  263751 main.go:141] libmachine: (functional-136272) Calling .GetSSHPort
I0122 20:14:36.689547  263751 main.go:141] libmachine: (functional-136272) Calling .GetSSHKeyPath
I0122 20:14:36.689800  263751 main.go:141] libmachine: (functional-136272) Calling .GetSSHUsername
I0122 20:14:36.689982  263751 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/functional-136272/id_rsa Username:docker}
I0122 20:14:36.805221  263751 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:14:36.958572  263751 main.go:141] libmachine: Making call to close driver server
I0122 20:14:36.958586  263751 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:36.958909  263751 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:36.958930  263751 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:36.958946  263751 main.go:141] libmachine: Making call to close driver server
I0122 20:14:36.958967  263751 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:36.958971  263751 main.go:141] libmachine: (functional-136272) DBG | Closing plugin on server side
I0122 20:14:36.959219  263751 main.go:141] libmachine: (functional-136272) DBG | Closing plugin on server side
I0122 20:14:36.959253  263751 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:36.959263  263751 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
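The JSON listing above is an array of objects with id, repoDigests, repoTags, and a string-typed size, so it decodes into a small struct. A runnable sketch using one trimmed entry copied from the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// image matches the shape of the "image ls --format json" output shown above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		// Trimmed sample taken from the listing above.
		data := []byte(`[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30",
			"repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],
			"repoTags":["localhost/kicbase/echo-server:functional-136272"],"size":"4943877"}]`)

		var images []image
		if err := json.Unmarshal(data, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			fmt.Printf("%s  %s  %s bytes\n", img.ID[:13], tag, img.Size)
		}
	}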

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136272 image ls --format yaml --alsologtostderr:
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-136272
size: "4943877"
- id: c0e6ac0ff656db164a10e73fe112485ef0f9c91ff3761bfbdc6b2d988d0bb056
repoDigests:
- localhost/minikube-local-cache-test@sha256:8a97788c17044fc802dcef8495919321d81b4239e2f7ebfc0702b3fbcbdcedb5
repoTags:
- localhost/minikube-local-cache-test:functional-136272
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136272 image ls --format yaml --alsologtostderr:
I0122 20:14:29.979421  263617 out.go:345] Setting OutFile to fd 1 ...
I0122 20:14:29.979611  263617 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:29.979622  263617 out.go:358] Setting ErrFile to fd 2...
I0122 20:14:29.979629  263617 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:29.980005  263617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
I0122 20:14:29.981007  263617 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:29.981177  263617 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:29.981801  263617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:29.981902  263617 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:29.998967  263617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
I0122 20:14:29.999606  263617 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:30.000322  263617 main.go:141] libmachine: Using API Version  1
I0122 20:14:30.000357  263617 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:30.000826  263617 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:30.001130  263617 main.go:141] libmachine: (functional-136272) Calling .GetState
I0122 20:14:30.003756  263617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:30.003881  263617 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:30.021605  263617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
I0122 20:14:30.022116  263617 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:30.022703  263617 main.go:141] libmachine: Using API Version  1
I0122 20:14:30.022729  263617 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:30.023264  263617 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:30.023467  263617 main.go:141] libmachine: (functional-136272) Calling .DriverName
I0122 20:14:30.023739  263617 ssh_runner.go:195] Run: systemctl --version
I0122 20:14:30.023773  263617 main.go:141] libmachine: (functional-136272) Calling .GetSSHHostname
I0122 20:14:30.027371  263617 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:30.027895  263617 main.go:141] libmachine: (functional-136272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:75:7d", ip: ""} in network mk-functional-136272: {Iface:virbr1 ExpiryTime:2025-01-22 21:11:49 +0000 UTC Type:0 Mac:52:54:00:21:75:7d Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-136272 Clientid:01:52:54:00:21:75:7d}
I0122 20:14:30.027945  263617 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined IP address 192.168.39.117 and MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:30.028202  263617 main.go:141] libmachine: (functional-136272) Calling .GetSSHPort
I0122 20:14:30.028443  263617 main.go:141] libmachine: (functional-136272) Calling .GetSSHKeyPath
I0122 20:14:30.028600  263617 main.go:141] libmachine: (functional-136272) Calling .GetSSHUsername
I0122 20:14:30.028791  263617 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/functional-136272/id_rsa Username:docker}
I0122 20:14:30.171801  263617 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:14:30.240050  263617 main.go:141] libmachine: Making call to close driver server
I0122 20:14:30.240072  263617 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:30.240502  263617 main.go:141] libmachine: (functional-136272) DBG | Closing plugin on server side
I0122 20:14:30.240587  263617 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:30.240597  263617 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:30.240613  263617 main.go:141] libmachine: Making call to close driver server
I0122 20:14:30.240625  263617 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:30.240890  263617 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:30.240909  263617 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
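
The --alsologtostderr trace above shows what "image ls --format yaml" actually does: it opens an SSH session to the node, lists images from the CRI-O store with crictl, and reformats the JSON into the requested output. A minimal way to look at the raw data the command is built from, using the same calls that appear in the trace:

	# Query the CRI-O image store directly over SSH, as "image ls" does internally
	out/minikube-linux-amd64 -p functional-136272 ssh "sudo crictl images --output json"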

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh pgrep buildkitd: exit status 1 (260.169043ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image build -t localhost/my-image:functional-136272 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 image build -t localhost/my-image:functional-136272 testdata/build --alsologtostderr: (5.496146709s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136272 image build -t localhost/my-image:functional-136272 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 29900f11401
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-136272
--> 9818338530a
Successfully tagged localhost/my-image:functional-136272
9818338530ac851772b9650f486a19c892a34d137137497334f48379e7172441
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136272 image build -t localhost/my-image:functional-136272 testdata/build --alsologtostderr:
I0122 20:14:30.570458  263671 out.go:345] Setting OutFile to fd 1 ...
I0122 20:14:30.571316  263671 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:30.571342  263671 out.go:358] Setting ErrFile to fd 2...
I0122 20:14:30.571349  263671 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:14:30.571743  263671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
I0122 20:14:30.572859  263671 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:30.573707  263671 config.go:182] Loaded profile config "functional-136272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0122 20:14:30.574176  263671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:30.574258  263671 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:30.591857  263671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
I0122 20:14:30.592580  263671 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:30.593331  263671 main.go:141] libmachine: Using API Version  1
I0122 20:14:30.593356  263671 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:30.593753  263671 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:30.594008  263671 main.go:141] libmachine: (functional-136272) Calling .GetState
I0122 20:14:30.596284  263671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0122 20:14:30.596352  263671 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:14:30.615217  263671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
I0122 20:14:30.615797  263671 main.go:141] libmachine: () Calling .GetVersion
I0122 20:14:30.616459  263671 main.go:141] libmachine: Using API Version  1
I0122 20:14:30.616490  263671 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:14:30.617374  263671 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:14:30.617814  263671 main.go:141] libmachine: (functional-136272) Calling .DriverName
I0122 20:14:30.618444  263671 ssh_runner.go:195] Run: systemctl --version
I0122 20:14:30.618487  263671 main.go:141] libmachine: (functional-136272) Calling .GetSSHHostname
I0122 20:14:30.622125  263671 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:30.622736  263671 main.go:141] libmachine: (functional-136272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:75:7d", ip: ""} in network mk-functional-136272: {Iface:virbr1 ExpiryTime:2025-01-22 21:11:49 +0000 UTC Type:0 Mac:52:54:00:21:75:7d Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-136272 Clientid:01:52:54:00:21:75:7d}
I0122 20:14:30.622786  263671 main.go:141] libmachine: (functional-136272) DBG | domain functional-136272 has defined IP address 192.168.39.117 and MAC address 52:54:00:21:75:7d in network mk-functional-136272
I0122 20:14:30.623144  263671 main.go:141] libmachine: (functional-136272) Calling .GetSSHPort
I0122 20:14:30.623424  263671 main.go:141] libmachine: (functional-136272) Calling .GetSSHKeyPath
I0122 20:14:30.623581  263671 main.go:141] libmachine: (functional-136272) Calling .GetSSHUsername
I0122 20:14:30.623749  263671 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/functional-136272/id_rsa Username:docker}
I0122 20:14:30.792653  263671 build_images.go:161] Building image from path: /tmp/build.4159575284.tar
I0122 20:14:30.792749  263671 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0122 20:14:30.814514  263671 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4159575284.tar
I0122 20:14:30.830136  263671 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4159575284.tar: stat -c "%s %y" /var/lib/minikube/build/build.4159575284.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4159575284.tar': No such file or directory
I0122 20:14:30.830210  263671 ssh_runner.go:362] scp /tmp/build.4159575284.tar --> /var/lib/minikube/build/build.4159575284.tar (3072 bytes)
I0122 20:14:30.900828  263671 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4159575284
I0122 20:14:30.943079  263671 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4159575284 -xf /var/lib/minikube/build/build.4159575284.tar
I0122 20:14:30.963769  263671 crio.go:315] Building image: /var/lib/minikube/build/build.4159575284
I0122 20:14:30.963864  263671 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-136272 /var/lib/minikube/build/build.4159575284 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0122 20:14:35.899405  263671 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-136272 /var/lib/minikube/build/build.4159575284 --cgroup-manager=cgroupfs: (4.935488111s)
I0122 20:14:35.899531  263671 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4159575284
I0122 20:14:35.925104  263671 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4159575284.tar
I0122 20:14:35.996660  263671 build_images.go:217] Built localhost/my-image:functional-136272 from /tmp/build.4159575284.tar
I0122 20:14:35.996701  263671 build_images.go:133] succeeded building to: functional-136272
I0122 20:14:35.996706  263671 build_images.go:134] failed building to: 
I0122 20:14:35.996734  263671 main.go:141] libmachine: Making call to close driver server
I0122 20:14:35.996746  263671 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:35.997116  263671 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:35.997138  263671 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:14:35.997149  263671 main.go:141] libmachine: Making call to close driver server
I0122 20:14:35.997159  263671 main.go:141] libmachine: (functional-136272) Calling .Close
I0122 20:14:35.997464  263671 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:14:35.997480  263671 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.33s)
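
The three STEP lines above come from podman building the transferred context inside the VM (the trace shows the tarball being copied to /var/lib/minikube/build and built with "sudo podman build"). A minimal sketch of reproducing the same build by hand, assuming testdata/build holds a Dockerfile with exactly the steps shown (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /):

	# Build the context through minikube; the image lands in the cluster's CRI-O store
	out/minikube-linux-amd64 -p functional-136272 image build -t localhost/my-image:functional-136272 testdata/build --alsologtostderr
	# Confirm the new tag is visible to the runtime
	out/minikube-linux-amd64 -p functional-136272 image ls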

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.574009799s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-136272
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-136272 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-136272 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-5pm8v" [e719353d-6154-42c4-9c1d-2b66315c5129] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-5pm8v" [e719353d-6154-42c4-9c1d-2b66315c5129] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005456074s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)
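
The test drives the deployment with plain kubectl and then polls the pod list until it reports Running. An equivalent manual sequence is sketched below; the kubectl wait call is an illustration of the readiness check, not what the test harness itself runs:

	# Create and expose hello-node exactly as the test does
	kubectl --context functional-136272 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-136272 expose deployment hello-node --type=NodePort --port=8080
	# Block until the pod is Ready (the test polls pod status with the same 10m budget)
	kubectl --context functional-136272 wait --for=condition=ready pod -l app=hello-node --timeout=10m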

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image load --daemon kicbase/echo-server:functional-136272 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 image load --daemon kicbase/echo-server:functional-136272 --alsologtostderr: (3.76470905s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)
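
"image load --daemon" copies the tag out of the host's Docker daemon and into the cluster's container runtime; the follow-up "image ls" is what confirms the transfer. The two commands from the log, shown together:

	# Push the locally tagged image from the host docker daemon into the cluster runtime
	out/minikube-linux-amd64 -p functional-136272 image load --daemon kicbase/echo-server:functional-136272 --alsologtostderr
	# Verify the tag now appears in the in-cluster image list
	out/minikube-linux-amd64 -p functional-136272 image ls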

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image load --daemon kicbase/echo-server:functional-136272 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-136272
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image load --daemon kicbase/echo-server:functional-136272 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image save kicbase/echo-server:functional-136272 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image rm kicbase/echo-server:functional-136272 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)
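
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tarball round trip: save the tag to a file on the host, delete it from the cluster, then restore it from the same file. Condensed from the three tests above:

	# Export the in-cluster image to a tarball on the host
	out/minikube-linux-amd64 -p functional-136272 image save kicbase/echo-server:functional-136272 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	# Remove it from the cluster, then load it back from the tarball
	out/minikube-linux-amd64 -p functional-136272 image rm kicbase/echo-server:functional-136272 --alsologtostderr
	out/minikube-linux-amd64 -p functional-136272 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr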

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-136272
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 image save --daemon kicbase/echo-server:functional-136272 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-136272 image save --daemon kicbase/echo-server:functional-136272 --alsologtostderr: (3.965369029s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-136272
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.01s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 service list -o json
functional_test.go:1494: Took "366.55326ms" to run "out/minikube-linux-amd64 -p functional-136272 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "441.95522ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "61.25711ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.117:31577
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "458.511361ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.468799ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdany-port3034327002/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737576857975063954" to /tmp/TestFunctionalparallelMountCmdany-port3034327002/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737576857975063954" to /tmp/TestFunctionalparallelMountCmdany-port3034327002/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737576857975063954" to /tmp/TestFunctionalparallelMountCmdany-port3034327002/001/test-1737576857975063954
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.021513ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0122 20:14:18.283435  254754 retry.go:31] will retry after 437.095172ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 22 20:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 22 20:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 22 20:14 test-1737576857975063954
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh cat /mount-9p/test-1737576857975063954
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-136272 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5c6db34-a835-4187-957f-e68dc1e216fe] Pending
helpers_test.go:344: "busybox-mount" [d5c6db34-a835-4187-957f-e68dc1e216fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5c6db34-a835-4187-957f-e68dc1e216fe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5c6db34-a835-4187-957f-e68dc1e216fe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005706428s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-136272 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdany-port3034327002/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
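
The retries above are the test waiting for the 9p mount to appear inside the VM before listing it. A hand-run version of the same check, with /tmp/hostdir standing in for whatever host directory is being exported (that path is illustrative, not the one the test used):

	# Export a host directory into the VM over 9p (kept running in the background here;
	# the test keeps it alive as a daemon for the duration of the check)
	out/minikube-linux-amd64 mount -p functional-136272 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	# Confirm the mount is 9p and inspect its contents from inside the VM
	out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-136272 ssh -- ls -la /mount-9p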

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.117:31577
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
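
All three UpdateContextCmd cases run the same command; broadly, it refreshes the kubeconfig entry for the profile so it points at the cluster's current address:

	# Re-sync the kubeconfig context for the functional-136272 profile
	out/minikube-linux-amd64 -p functional-136272 update-context --alsologtostderr -v=2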

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 version -o=json --components
2025/01/22 20:14:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdspecific-port2010511268/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.598221ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0122 20:14:26.076375  254754 retry.go:31] will retry after 727.426422ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdspecific-port2010511268/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh "sudo umount -f /mount-9p": exit status 1 (268.617011ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-136272 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdspecific-port2010511268/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4009579624/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4009579624/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4009579624/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T" /mount1: exit status 1 (334.107806ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0122 20:14:28.314596  254754 retry.go:31] will retry after 348.82089ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136272 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-136272 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4009579624/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4009579624/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136272 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4009579624/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)
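
VerifyCleanup starts three mounts and then tears them all down with a single kill; the teardown command from the log, shown on its own:

	# Terminate every minikube mount process belonging to this profile
	out/minikube-linux-amd64 mount -p functional-136272 --kill=true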

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-136272
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-136272
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-136272
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (213.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-219230 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0122 20:15:50.257421  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:16:17.973102  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-219230 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m32.775028746s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (213.57s)
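
The --ha flag in the start command requests a multi-control-plane topology, which is why the later CopyFile step sees nodes ha-219230 through ha-219230-m04 (three control planes plus the worker added in AddWorkerNode). The start invocation from the log:

	# Bring up the HA cluster under test: KVM driver, CRI-O runtime, multiple control planes
	out/minikube-linux-amd64 start -p ha-219230 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr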

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-219230 -- rollout status deployment/busybox: (5.688031767s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-n9ltr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-skzcs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-n9ltr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-skzcs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-n9ltr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-skzcs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.27s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-n9ltr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-n9ltr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-skzcs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-skzcs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)
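
Each pod-to-host check above does two things: resolve host.minikube.internal from inside a busybox pod, then ping the address that comes back (192.168.39.1 in this run). The pair of commands for one of the pods, exactly as the log shows (pod names change between runs):

	# Resolve the host gateway name from inside the pod
	out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# Ping the resolved host-side address
	out/minikube-linux-amd64 kubectl -p ha-219230 -- exec busybox-58667487b6-5hcfl -- sh -c "ping -c 1 192.168.39.1"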

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-219230 -v=7 --alsologtostderr
E0122 20:19:04.376597  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:04.383055  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:04.394520  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:04.416006  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:04.457488  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:04.539007  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:04.700618  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:05.022416  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:05.664772  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:06.947094  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:09.508791  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:14.630463  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:24.872199  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-219230 -v=7 --alsologtostderr: (55.399542601s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.38s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-219230 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

TestMultiControlPlane/serial/CopyFile (14.75s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp testdata/cp-test.txt ha-219230:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1841413592/001/cp-test_ha-219230.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230:/home/docker/cp-test.txt ha-219230-m02:/home/docker/cp-test_ha-219230_ha-219230-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test_ha-219230_ha-219230-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230:/home/docker/cp-test.txt ha-219230-m03:/home/docker/cp-test_ha-219230_ha-219230-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test_ha-219230_ha-219230-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230:/home/docker/cp-test.txt ha-219230-m04:/home/docker/cp-test_ha-219230_ha-219230-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test_ha-219230_ha-219230-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp testdata/cp-test.txt ha-219230-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1841413592/001/cp-test_ha-219230-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m02:/home/docker/cp-test.txt ha-219230:/home/docker/cp-test_ha-219230-m02_ha-219230.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test.txt"
E0122 20:19:45.354519  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test_ha-219230-m02_ha-219230.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m02:/home/docker/cp-test.txt ha-219230-m03:/home/docker/cp-test_ha-219230-m02_ha-219230-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test_ha-219230-m02_ha-219230-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m02:/home/docker/cp-test.txt ha-219230-m04:/home/docker/cp-test_ha-219230-m02_ha-219230-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test_ha-219230-m02_ha-219230-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp testdata/cp-test.txt ha-219230-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1841413592/001/cp-test_ha-219230-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m03:/home/docker/cp-test.txt ha-219230:/home/docker/cp-test_ha-219230-m03_ha-219230.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test_ha-219230-m03_ha-219230.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m03:/home/docker/cp-test.txt ha-219230-m02:/home/docker/cp-test_ha-219230-m03_ha-219230-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test_ha-219230-m03_ha-219230-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m03:/home/docker/cp-test.txt ha-219230-m04:/home/docker/cp-test_ha-219230-m03_ha-219230-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test_ha-219230-m03_ha-219230-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp testdata/cp-test.txt ha-219230-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1841413592/001/cp-test_ha-219230-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m04:/home/docker/cp-test.txt ha-219230:/home/docker/cp-test_ha-219230-m04_ha-219230.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230 "sudo cat /home/docker/cp-test_ha-219230-m04_ha-219230.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m04:/home/docker/cp-test.txt ha-219230-m02:/home/docker/cp-test_ha-219230-m04_ha-219230-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m02 "sudo cat /home/docker/cp-test_ha-219230-m04_ha-219230-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 cp ha-219230-m04:/home/docker/cp-test.txt ha-219230-m03:/home/docker/cp-test_ha-219230-m04_ha-219230-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 ssh -n ha-219230-m03 "sudo cat /home/docker/cp-test_ha-219230-m04_ha-219230-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.75s)

TestMultiControlPlane/serial/StopSecondaryNode (91.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 node stop m02 -v=7 --alsologtostderr
E0122 20:20:26.316991  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:20:50.258028  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-219230 node stop m02 -v=7 --alsologtostderr: (1m31.06922217s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr: exit status 7 (727.11621ms)

-- stdout --
	ha-219230
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-219230-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-219230-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-219230-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0122 20:21:25.432114  269066 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:21:25.432261  269066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:21:25.432272  269066 out.go:358] Setting ErrFile to fd 2...
	I0122 20:21:25.432278  269066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:21:25.432482  269066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:21:25.432693  269066 out.go:352] Setting JSON to false
	I0122 20:21:25.432750  269066 mustload.go:65] Loading cluster: ha-219230
	I0122 20:21:25.432851  269066 notify.go:220] Checking for updates...
	I0122 20:21:25.433274  269066 config.go:182] Loaded profile config "ha-219230": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:21:25.433305  269066 status.go:174] checking status of ha-219230 ...
	I0122 20:21:25.433792  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.433862  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.451262  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0122 20:21:25.451726  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.452476  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.452508  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.452985  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.453265  269066 main.go:141] libmachine: (ha-219230) Calling .GetState
	I0122 20:21:25.455184  269066 status.go:371] ha-219230 host status = "Running" (err=<nil>)
	I0122 20:21:25.455227  269066 host.go:66] Checking if "ha-219230" exists ...
	I0122 20:21:25.455595  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.455667  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.473151  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0122 20:21:25.473754  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.474572  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.474608  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.475021  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.475257  269066 main.go:141] libmachine: (ha-219230) Calling .GetIP
	I0122 20:21:25.479211  269066 main.go:141] libmachine: (ha-219230) DBG | domain ha-219230 has defined MAC address 52:54:00:f3:f3:84 in network mk-ha-219230
	I0122 20:21:25.479742  269066 main.go:141] libmachine: (ha-219230) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:f3:84", ip: ""} in network mk-ha-219230: {Iface:virbr1 ExpiryTime:2025-01-22 21:15:15 +0000 UTC Type:0 Mac:52:54:00:f3:f3:84 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-219230 Clientid:01:52:54:00:f3:f3:84}
	I0122 20:21:25.479782  269066 main.go:141] libmachine: (ha-219230) DBG | domain ha-219230 has defined IP address 192.168.39.124 and MAC address 52:54:00:f3:f3:84 in network mk-ha-219230
	I0122 20:21:25.480065  269066 host.go:66] Checking if "ha-219230" exists ...
	I0122 20:21:25.480398  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.480451  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.496982  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0122 20:21:25.497571  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.498278  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.498308  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.498694  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.498895  269066 main.go:141] libmachine: (ha-219230) Calling .DriverName
	I0122 20:21:25.499108  269066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:21:25.499154  269066 main.go:141] libmachine: (ha-219230) Calling .GetSSHHostname
	I0122 20:21:25.502275  269066 main.go:141] libmachine: (ha-219230) DBG | domain ha-219230 has defined MAC address 52:54:00:f3:f3:84 in network mk-ha-219230
	I0122 20:21:25.502730  269066 main.go:141] libmachine: (ha-219230) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:f3:84", ip: ""} in network mk-ha-219230: {Iface:virbr1 ExpiryTime:2025-01-22 21:15:15 +0000 UTC Type:0 Mac:52:54:00:f3:f3:84 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-219230 Clientid:01:52:54:00:f3:f3:84}
	I0122 20:21:25.502766  269066 main.go:141] libmachine: (ha-219230) DBG | domain ha-219230 has defined IP address 192.168.39.124 and MAC address 52:54:00:f3:f3:84 in network mk-ha-219230
	I0122 20:21:25.502920  269066 main.go:141] libmachine: (ha-219230) Calling .GetSSHPort
	I0122 20:21:25.503153  269066 main.go:141] libmachine: (ha-219230) Calling .GetSSHKeyPath
	I0122 20:21:25.503310  269066 main.go:141] libmachine: (ha-219230) Calling .GetSSHUsername
	I0122 20:21:25.503474  269066 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/ha-219230/id_rsa Username:docker}
	I0122 20:21:25.605588  269066 ssh_runner.go:195] Run: systemctl --version
	I0122 20:21:25.614514  269066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:21:25.634286  269066 kubeconfig.go:125] found "ha-219230" server: "https://192.168.39.254:8443"
	I0122 20:21:25.634343  269066 api_server.go:166] Checking apiserver status ...
	I0122 20:21:25.634401  269066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:21:25.652073  269066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0122 20:21:25.665403  269066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0122 20:21:25.665488  269066 ssh_runner.go:195] Run: ls
	I0122 20:21:25.675992  269066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0122 20:21:25.682319  269066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0122 20:21:25.682359  269066 status.go:463] ha-219230 apiserver status = Running (err=<nil>)
	I0122 20:21:25.682372  269066 status.go:176] ha-219230 status: &{Name:ha-219230 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:21:25.682393  269066 status.go:174] checking status of ha-219230-m02 ...
	I0122 20:21:25.682723  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.682770  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.699410  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I0122 20:21:25.699903  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.700450  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.700502  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.700833  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.701070  269066 main.go:141] libmachine: (ha-219230-m02) Calling .GetState
	I0122 20:21:25.702836  269066 status.go:371] ha-219230-m02 host status = "Stopped" (err=<nil>)
	I0122 20:21:25.702855  269066 status.go:384] host is not running, skipping remaining checks
	I0122 20:21:25.702862  269066 status.go:176] ha-219230-m02 status: &{Name:ha-219230-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:21:25.702901  269066 status.go:174] checking status of ha-219230-m03 ...
	I0122 20:21:25.703356  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.703416  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.719696  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0122 20:21:25.720260  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.720872  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.720906  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.721295  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.721553  269066 main.go:141] libmachine: (ha-219230-m03) Calling .GetState
	I0122 20:21:25.723436  269066 status.go:371] ha-219230-m03 host status = "Running" (err=<nil>)
	I0122 20:21:25.723463  269066 host.go:66] Checking if "ha-219230-m03" exists ...
	I0122 20:21:25.723812  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.723883  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.740061  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39327
	I0122 20:21:25.740674  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.741364  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.741396  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.741752  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.741924  269066 main.go:141] libmachine: (ha-219230-m03) Calling .GetIP
	I0122 20:21:25.745317  269066 main.go:141] libmachine: (ha-219230-m03) DBG | domain ha-219230-m03 has defined MAC address 52:54:00:f0:4e:2e in network mk-ha-219230
	I0122 20:21:25.745729  269066 main.go:141] libmachine: (ha-219230-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:4e:2e", ip: ""} in network mk-ha-219230: {Iface:virbr1 ExpiryTime:2025-01-22 21:17:26 +0000 UTC Type:0 Mac:52:54:00:f0:4e:2e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-219230-m03 Clientid:01:52:54:00:f0:4e:2e}
	I0122 20:21:25.745777  269066 main.go:141] libmachine: (ha-219230-m03) DBG | domain ha-219230-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:f0:4e:2e in network mk-ha-219230
	I0122 20:21:25.745911  269066 host.go:66] Checking if "ha-219230-m03" exists ...
	I0122 20:21:25.746280  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.746334  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.763562  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I0122 20:21:25.764048  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.764666  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.764695  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.765090  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.765318  269066 main.go:141] libmachine: (ha-219230-m03) Calling .DriverName
	I0122 20:21:25.765488  269066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:21:25.765509  269066 main.go:141] libmachine: (ha-219230-m03) Calling .GetSSHHostname
	I0122 20:21:25.768866  269066 main.go:141] libmachine: (ha-219230-m03) DBG | domain ha-219230-m03 has defined MAC address 52:54:00:f0:4e:2e in network mk-ha-219230
	I0122 20:21:25.769337  269066 main.go:141] libmachine: (ha-219230-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:4e:2e", ip: ""} in network mk-ha-219230: {Iface:virbr1 ExpiryTime:2025-01-22 21:17:26 +0000 UTC Type:0 Mac:52:54:00:f0:4e:2e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-219230-m03 Clientid:01:52:54:00:f0:4e:2e}
	I0122 20:21:25.769371  269066 main.go:141] libmachine: (ha-219230-m03) DBG | domain ha-219230-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:f0:4e:2e in network mk-ha-219230
	I0122 20:21:25.769591  269066 main.go:141] libmachine: (ha-219230-m03) Calling .GetSSHPort
	I0122 20:21:25.769800  269066 main.go:141] libmachine: (ha-219230-m03) Calling .GetSSHKeyPath
	I0122 20:21:25.769986  269066 main.go:141] libmachine: (ha-219230-m03) Calling .GetSSHUsername
	I0122 20:21:25.770133  269066 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/ha-219230-m03/id_rsa Username:docker}
	I0122 20:21:25.856443  269066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:21:25.879177  269066 kubeconfig.go:125] found "ha-219230" server: "https://192.168.39.254:8443"
	I0122 20:21:25.879217  269066 api_server.go:166] Checking apiserver status ...
	I0122 20:21:25.879267  269066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:21:25.896441  269066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0122 20:21:25.909796  269066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0122 20:21:25.909887  269066 ssh_runner.go:195] Run: ls
	I0122 20:21:25.916007  269066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0122 20:21:25.922780  269066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0122 20:21:25.922823  269066 status.go:463] ha-219230-m03 apiserver status = Running (err=<nil>)
	I0122 20:21:25.922836  269066 status.go:176] ha-219230-m03 status: &{Name:ha-219230-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:21:25.922865  269066 status.go:174] checking status of ha-219230-m04 ...
	I0122 20:21:25.923365  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.923478  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.941416  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0122 20:21:25.941914  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.942445  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.942474  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.942836  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.943091  269066 main.go:141] libmachine: (ha-219230-m04) Calling .GetState
	I0122 20:21:25.944908  269066 status.go:371] ha-219230-m04 host status = "Running" (err=<nil>)
	I0122 20:21:25.944934  269066 host.go:66] Checking if "ha-219230-m04" exists ...
	I0122 20:21:25.945260  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.945315  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.962355  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0122 20:21:25.962964  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.963613  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.963646  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.964055  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.964308  269066 main.go:141] libmachine: (ha-219230-m04) Calling .GetIP
	I0122 20:21:25.968212  269066 main.go:141] libmachine: (ha-219230-m04) DBG | domain ha-219230-m04 has defined MAC address 52:54:00:bb:eb:fe in network mk-ha-219230
	I0122 20:21:25.968776  269066 main.go:141] libmachine: (ha-219230-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:eb:fe", ip: ""} in network mk-ha-219230: {Iface:virbr1 ExpiryTime:2025-01-22 21:18:59 +0000 UTC Type:0 Mac:52:54:00:bb:eb:fe Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-219230-m04 Clientid:01:52:54:00:bb:eb:fe}
	I0122 20:21:25.968796  269066 main.go:141] libmachine: (ha-219230-m04) DBG | domain ha-219230-m04 has defined IP address 192.168.39.118 and MAC address 52:54:00:bb:eb:fe in network mk-ha-219230
	I0122 20:21:25.969110  269066 host.go:66] Checking if "ha-219230-m04" exists ...
	I0122 20:21:25.969576  269066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:21:25.969637  269066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:21:25.986613  269066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0122 20:21:25.987236  269066 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:21:25.987846  269066 main.go:141] libmachine: Using API Version  1
	I0122 20:21:25.987873  269066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:21:25.988262  269066 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:21:25.988484  269066 main.go:141] libmachine: (ha-219230-m04) Calling .DriverName
	I0122 20:21:25.988668  269066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:21:25.988691  269066 main.go:141] libmachine: (ha-219230-m04) Calling .GetSSHHostname
	I0122 20:21:25.991843  269066 main.go:141] libmachine: (ha-219230-m04) DBG | domain ha-219230-m04 has defined MAC address 52:54:00:bb:eb:fe in network mk-ha-219230
	I0122 20:21:25.992341  269066 main.go:141] libmachine: (ha-219230-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:eb:fe", ip: ""} in network mk-ha-219230: {Iface:virbr1 ExpiryTime:2025-01-22 21:18:59 +0000 UTC Type:0 Mac:52:54:00:bb:eb:fe Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-219230-m04 Clientid:01:52:54:00:bb:eb:fe}
	I0122 20:21:25.992390  269066 main.go:141] libmachine: (ha-219230-m04) DBG | domain ha-219230-m04 has defined IP address 192.168.39.118 and MAC address 52:54:00:bb:eb:fe in network mk-ha-219230
	I0122 20:21:25.992525  269066 main.go:141] libmachine: (ha-219230-m04) Calling .GetSSHPort
	I0122 20:21:25.992713  269066 main.go:141] libmachine: (ha-219230-m04) Calling .GetSSHKeyPath
	I0122 20:21:25.992868  269066 main.go:141] libmachine: (ha-219230-m04) Calling .GetSSHUsername
	I0122 20:21:25.993022  269066 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/ha-219230-m04/id_rsa Username:docker}
	I0122 20:21:26.081070  269066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:21:26.099802  269066 status.go:176] ha-219230-m04 status: &{Name:ha-219230-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.80s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (53.98s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 node start m02 -v=7 --alsologtostderr
E0122 20:21:48.239487  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-219230 node start m02 -v=7 --alsologtostderr: (52.943978341s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (53.98s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (442.16s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-219230 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-219230 -v=7 --alsologtostderr
E0122 20:24:04.376513  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:24:32.081332  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:25:50.257168  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-219230 -v=7 --alsologtostderr: (4m34.162107934s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-219230 --wait=true -v=7 --alsologtostderr
E0122 20:27:13.335697  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:29:04.376844  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-219230 --wait=true -v=7 --alsologtostderr: (2m47.865758385s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-219230
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (442.16s)

TestMultiControlPlane/serial/DeleteSecondaryNode (19.04s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-219230 node delete m03 -v=7 --alsologtostderr: (18.179836761s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.04s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

TestMultiControlPlane/serial/StopCluster (273.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 stop -v=7 --alsologtostderr
E0122 20:30:50.257736  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:34:04.377094  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-219230 stop -v=7 --alsologtostderr: (4m32.967470449s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr: exit status 7 (125.249579ms)

-- stdout --
	ha-219230
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-219230-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-219230-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0122 20:34:36.722655  273319 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:34:36.722785  273319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:34:36.722794  273319 out.go:358] Setting ErrFile to fd 2...
	I0122 20:34:36.722798  273319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:34:36.723017  273319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:34:36.723249  273319 out.go:352] Setting JSON to false
	I0122 20:34:36.723290  273319 mustload.go:65] Loading cluster: ha-219230
	I0122 20:34:36.723340  273319 notify.go:220] Checking for updates...
	I0122 20:34:36.723797  273319 config.go:182] Loaded profile config "ha-219230": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:34:36.723827  273319 status.go:174] checking status of ha-219230 ...
	I0122 20:34:36.724281  273319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:34:36.724336  273319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:34:36.743055  273319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0122 20:34:36.743609  273319 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:34:36.744319  273319 main.go:141] libmachine: Using API Version  1
	I0122 20:34:36.744355  273319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:34:36.744759  273319 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:34:36.745029  273319 main.go:141] libmachine: (ha-219230) Calling .GetState
	I0122 20:34:36.747240  273319 status.go:371] ha-219230 host status = "Stopped" (err=<nil>)
	I0122 20:34:36.747265  273319 status.go:384] host is not running, skipping remaining checks
	I0122 20:34:36.747273  273319 status.go:176] ha-219230 status: &{Name:ha-219230 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:34:36.747298  273319 status.go:174] checking status of ha-219230-m02 ...
	I0122 20:34:36.747633  273319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:34:36.747683  273319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:34:36.764293  273319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I0122 20:34:36.764813  273319 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:34:36.765393  273319 main.go:141] libmachine: Using API Version  1
	I0122 20:34:36.765423  273319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:34:36.765752  273319 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:34:36.765949  273319 main.go:141] libmachine: (ha-219230-m02) Calling .GetState
	I0122 20:34:36.767602  273319 status.go:371] ha-219230-m02 host status = "Stopped" (err=<nil>)
	I0122 20:34:36.767624  273319 status.go:384] host is not running, skipping remaining checks
	I0122 20:34:36.767631  273319 status.go:176] ha-219230-m02 status: &{Name:ha-219230-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:34:36.767669  273319 status.go:174] checking status of ha-219230-m04 ...
	I0122 20:34:36.768005  273319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:34:36.768050  273319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:34:36.784127  273319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I0122 20:34:36.784676  273319 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:34:36.785323  273319 main.go:141] libmachine: Using API Version  1
	I0122 20:34:36.785348  273319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:34:36.785723  273319 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:34:36.785997  273319 main.go:141] libmachine: (ha-219230-m04) Calling .GetState
	I0122 20:34:36.788197  273319 status.go:371] ha-219230-m04 host status = "Stopped" (err=<nil>)
	I0122 20:34:36.788221  273319 status.go:384] host is not running, skipping remaining checks
	I0122 20:34:36.788229  273319 status.go:176] ha-219230-m04 status: &{Name:ha-219230-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.09s)

TestMultiControlPlane/serial/RestartCluster (122.12s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-219230 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0122 20:35:27.445245  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:35:50.257374  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-219230 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m1.249498157s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (122.12s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (81.75s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-219230 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-219230 --control-plane -v=7 --alsologtostderr: (1m20.771799815s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-219230 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.018021709s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

TestJSONOutput/start/Command (90.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-592218 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0122 20:39:04.378457  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-592218 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m30.887098776s)
--- PASS: TestJSONOutput/start/Command (90.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-592218 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-592218 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.46s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-592218 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-592218 --output=json --user=testUser: (7.461470354s)
--- PASS: TestJSONOutput/stop/Command (7.46s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-027939 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-027939 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.039103ms)

-- stdout --
	{"specversion":"1.0","id":"df48c591-cdd3-4389-8578-0aab7f9c6f1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-027939] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1d9445d-0581-40b0-9a8b-13cbf2016ecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20288"}}
	{"specversion":"1.0","id":"28bf1316-cf85-47e2-8a43-29c64f4365a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90a1f38e-fa8f-418a-9c61-4391a341d162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig"}}
	{"specversion":"1.0","id":"b8e9021c-789c-4804-9fda-1c70baa4569c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube"}}
	{"specversion":"1.0","id":"85a8fd1f-ab2b-4c28-b28c-b5152472a245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"db107408-5c43-4ba2-be92-1c7ec9c86c9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cfa00374-a0c7-43ab-bff6-3877de3fe235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-027939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-027939
--- PASS: TestErrorJSONOutput (0.23s)
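
For reference: the --output=json lines captured in the -- stdout -- block above are CloudEvents-style envelopes, one JSON object per line, with the payload under "data". A minimal Go sketch for filtering out the error event (field names are taken from the log above; reading the stream from stdin is an illustrative assumption, not how the test consumes it):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the envelope shape visible in the log: a "type" plus a string map "data".
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip lines that are not JSON events
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Piped over the stdout block above, such a filter would print only the DRV_UNSUPPORTED_OS line (exit code 56).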

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.09s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-360773 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-360773 --driver=kvm2  --container-runtime=crio: (45.211204794s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-377002 --driver=kvm2  --container-runtime=crio
E0122 20:40:50.262509  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-377002 --driver=kvm2  --container-runtime=crio: (48.481567986s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-360773
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-377002
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-377002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-377002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-377002: (1.101266214s)
helpers_test.go:175: Cleaning up "first-360773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-360773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-360773: (1.094917915s)
--- PASS: TestMinikubeProfile (97.09s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-994460 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-994460 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.218889327s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.22s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-994460 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-994460 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-016352 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-016352 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.057760138s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.06s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-016352 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-016352 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.96s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-994460 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-016352 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-016352 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

                                                
                                    
TestMountStart/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-016352
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-016352: (1.391068712s)
--- PASS: TestMountStart/serial/Stop (1.39s)

                                                
                                    
TestMountStart/serial/RestartStopped (27.15s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-016352
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-016352: (26.146715169s)
--- PASS: TestMountStart/serial/RestartStopped (27.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-016352 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-016352 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-330484 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0122 20:43:53.338060  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:44:04.377080  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-330484 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.746424073s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.23s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-330484 -- rollout status deployment/busybox: (4.004943109s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-7rsfs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-cdfmm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-7rsfs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-cdfmm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-7rsfs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-cdfmm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.77s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-7rsfs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-7rsfs -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-cdfmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-330484 -- exec busybox-58667487b6-cdfmm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (53.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-330484 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-330484 -v 3 --alsologtostderr: (53.099109732s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-330484 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp testdata/cp-test.txt multinode-330484:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484 "sudo cat /home/docker/cp-test.txt"
E0122 20:45:50.257844  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3053864028/001/cp-test_multinode-330484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484:/home/docker/cp-test.txt multinode-330484-m02:/home/docker/cp-test_multinode-330484_multinode-330484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m02 "sudo cat /home/docker/cp-test_multinode-330484_multinode-330484-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484:/home/docker/cp-test.txt multinode-330484-m03:/home/docker/cp-test_multinode-330484_multinode-330484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m03 "sudo cat /home/docker/cp-test_multinode-330484_multinode-330484-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp testdata/cp-test.txt multinode-330484-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3053864028/001/cp-test_multinode-330484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484-m02:/home/docker/cp-test.txt multinode-330484:/home/docker/cp-test_multinode-330484-m02_multinode-330484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484 "sudo cat /home/docker/cp-test_multinode-330484-m02_multinode-330484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484-m02:/home/docker/cp-test.txt multinode-330484-m03:/home/docker/cp-test_multinode-330484-m02_multinode-330484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m03 "sudo cat /home/docker/cp-test_multinode-330484-m02_multinode-330484-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp testdata/cp-test.txt multinode-330484-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3053864028/001/cp-test_multinode-330484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484-m03:/home/docker/cp-test.txt multinode-330484:/home/docker/cp-test_multinode-330484-m03_multinode-330484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484 "sudo cat /home/docker/cp-test_multinode-330484-m03_multinode-330484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 cp multinode-330484-m03:/home/docker/cp-test.txt multinode-330484-m02:/home/docker/cp-test_multinode-330484-m03_multinode-330484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 ssh -n multinode-330484-m02 "sudo cat /home/docker/cp-test_multinode-330484-m03_multinode-330484-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.12s)

                                                
                                    
TestMultiNode/serial/StopNode (3.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-330484 node stop m03: (2.311093275s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-330484 status: exit status 7 (475.461867ms)

                                                
                                                
-- stdout --
	multinode-330484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-330484-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-330484-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr: exit status 7 (491.542604ms)

                                                
                                                
-- stdout --
	multinode-330484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-330484-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-330484-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 20:46:00.230335  281255 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:46:00.230482  281255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:46:00.230492  281255 out.go:358] Setting ErrFile to fd 2...
	I0122 20:46:00.230496  281255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:46:00.230731  281255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:46:00.230937  281255 out.go:352] Setting JSON to false
	I0122 20:46:00.230979  281255 mustload.go:65] Loading cluster: multinode-330484
	I0122 20:46:00.231071  281255 notify.go:220] Checking for updates...
	I0122 20:46:00.231561  281255 config.go:182] Loaded profile config "multinode-330484": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:46:00.231595  281255 status.go:174] checking status of multinode-330484 ...
	I0122 20:46:00.232106  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.232257  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.255725  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0122 20:46:00.256300  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.257148  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.257185  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.257564  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.257807  281255 main.go:141] libmachine: (multinode-330484) Calling .GetState
	I0122 20:46:00.260808  281255 status.go:371] multinode-330484 host status = "Running" (err=<nil>)
	I0122 20:46:00.260856  281255 host.go:66] Checking if "multinode-330484" exists ...
	I0122 20:46:00.261447  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.261597  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.279626  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0122 20:46:00.280215  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.280706  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.280731  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.281079  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.281309  281255 main.go:141] libmachine: (multinode-330484) Calling .GetIP
	I0122 20:46:00.284696  281255 main.go:141] libmachine: (multinode-330484) DBG | domain multinode-330484 has defined MAC address 52:54:00:86:76:f9 in network mk-multinode-330484
	I0122 20:46:00.285177  281255 main.go:141] libmachine: (multinode-330484) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:76:f9", ip: ""} in network mk-multinode-330484: {Iface:virbr1 ExpiryTime:2025-01-22 21:43:08 +0000 UTC Type:0 Mac:52:54:00:86:76:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-330484 Clientid:01:52:54:00:86:76:f9}
	I0122 20:46:00.285211  281255 main.go:141] libmachine: (multinode-330484) DBG | domain multinode-330484 has defined IP address 192.168.39.5 and MAC address 52:54:00:86:76:f9 in network mk-multinode-330484
	I0122 20:46:00.285448  281255 host.go:66] Checking if "multinode-330484" exists ...
	I0122 20:46:00.285808  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.285885  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.302862  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0122 20:46:00.303348  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.303959  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.303987  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.304321  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.304577  281255 main.go:141] libmachine: (multinode-330484) Calling .DriverName
	I0122 20:46:00.304784  281255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:46:00.304808  281255 main.go:141] libmachine: (multinode-330484) Calling .GetSSHHostname
	I0122 20:46:00.309198  281255 main.go:141] libmachine: (multinode-330484) DBG | domain multinode-330484 has defined MAC address 52:54:00:86:76:f9 in network mk-multinode-330484
	I0122 20:46:00.309726  281255 main.go:141] libmachine: (multinode-330484) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:76:f9", ip: ""} in network mk-multinode-330484: {Iface:virbr1 ExpiryTime:2025-01-22 21:43:08 +0000 UTC Type:0 Mac:52:54:00:86:76:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-330484 Clientid:01:52:54:00:86:76:f9}
	I0122 20:46:00.309751  281255 main.go:141] libmachine: (multinode-330484) DBG | domain multinode-330484 has defined IP address 192.168.39.5 and MAC address 52:54:00:86:76:f9 in network mk-multinode-330484
	I0122 20:46:00.309998  281255 main.go:141] libmachine: (multinode-330484) Calling .GetSSHPort
	I0122 20:46:00.310273  281255 main.go:141] libmachine: (multinode-330484) Calling .GetSSHKeyPath
	I0122 20:46:00.310459  281255 main.go:141] libmachine: (multinode-330484) Calling .GetSSHUsername
	I0122 20:46:00.310634  281255 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/multinode-330484/id_rsa Username:docker}
	I0122 20:46:00.395362  281255 ssh_runner.go:195] Run: systemctl --version
	I0122 20:46:00.403720  281255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:46:00.421594  281255 kubeconfig.go:125] found "multinode-330484" server: "https://192.168.39.5:8443"
	I0122 20:46:00.421639  281255 api_server.go:166] Checking apiserver status ...
	I0122 20:46:00.421686  281255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:46:00.443402  281255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1095/cgroup
	W0122 20:46:00.456280  281255 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1095/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0122 20:46:00.456362  281255 ssh_runner.go:195] Run: ls
	I0122 20:46:00.462778  281255 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0122 20:46:00.468212  281255 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0122 20:46:00.468250  281255 status.go:463] multinode-330484 apiserver status = Running (err=<nil>)
	I0122 20:46:00.468265  281255 status.go:176] multinode-330484 status: &{Name:multinode-330484 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:46:00.468306  281255 status.go:174] checking status of multinode-330484-m02 ...
	I0122 20:46:00.468634  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.468717  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.485612  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0122 20:46:00.486223  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.486853  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.486879  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.487290  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.487514  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .GetState
	I0122 20:46:00.489498  281255 status.go:371] multinode-330484-m02 host status = "Running" (err=<nil>)
	I0122 20:46:00.489529  281255 host.go:66] Checking if "multinode-330484-m02" exists ...
	I0122 20:46:00.489865  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.489911  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.507066  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0122 20:46:00.507631  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.508251  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.508285  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.508833  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.509073  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .GetIP
	I0122 20:46:00.512100  281255 main.go:141] libmachine: (multinode-330484-m02) DBG | domain multinode-330484-m02 has defined MAC address 52:54:00:07:96:55 in network mk-multinode-330484
	I0122 20:46:00.512543  281255 main.go:141] libmachine: (multinode-330484-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:96:55", ip: ""} in network mk-multinode-330484: {Iface:virbr1 ExpiryTime:2025-01-22 21:44:11 +0000 UTC Type:0 Mac:52:54:00:07:96:55 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-330484-m02 Clientid:01:52:54:00:07:96:55}
	I0122 20:46:00.512595  281255 main.go:141] libmachine: (multinode-330484-m02) DBG | domain multinode-330484-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:07:96:55 in network mk-multinode-330484
	I0122 20:46:00.512882  281255 host.go:66] Checking if "multinode-330484-m02" exists ...
	I0122 20:46:00.513301  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.513356  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.530445  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0122 20:46:00.530945  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.531457  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.531484  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.531812  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.532032  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .DriverName
	I0122 20:46:00.532255  281255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:46:00.532290  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .GetSSHHostname
	I0122 20:46:00.535966  281255 main.go:141] libmachine: (multinode-330484-m02) DBG | domain multinode-330484-m02 has defined MAC address 52:54:00:07:96:55 in network mk-multinode-330484
	I0122 20:46:00.536367  281255 main.go:141] libmachine: (multinode-330484-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:96:55", ip: ""} in network mk-multinode-330484: {Iface:virbr1 ExpiryTime:2025-01-22 21:44:11 +0000 UTC Type:0 Mac:52:54:00:07:96:55 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-330484-m02 Clientid:01:52:54:00:07:96:55}
	I0122 20:46:00.536398  281255 main.go:141] libmachine: (multinode-330484-m02) DBG | domain multinode-330484-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:07:96:55 in network mk-multinode-330484
	I0122 20:46:00.536752  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .GetSSHPort
	I0122 20:46:00.536994  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .GetSSHKeyPath
	I0122 20:46:00.537165  281255 main.go:141] libmachine: (multinode-330484-m02) Calling .GetSSHUsername
	I0122 20:46:00.537371  281255 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-247142/.minikube/machines/multinode-330484-m02/id_rsa Username:docker}
	I0122 20:46:00.623033  281255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:46:00.638665  281255 status.go:176] multinode-330484-m02 status: &{Name:multinode-330484-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:46:00.638752  281255 status.go:174] checking status of multinode-330484-m03 ...
	I0122 20:46:00.639154  281255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:46:00.639214  281255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:46:00.657196  281255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0122 20:46:00.657787  281255 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:46:00.658395  281255 main.go:141] libmachine: Using API Version  1
	I0122 20:46:00.658432  281255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:46:00.658830  281255 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:46:00.659055  281255 main.go:141] libmachine: (multinode-330484-m03) Calling .GetState
	I0122 20:46:00.660894  281255 status.go:371] multinode-330484-m03 host status = "Stopped" (err=<nil>)
	I0122 20:46:00.660916  281255 status.go:384] host is not running, skipping remaining checks
	I0122 20:46:00.660924  281255 status.go:176] multinode-330484-m03 status: &{Name:multinode-330484-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.28s)
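
For reference: the stderr trace above shows how `status` checks a control-plane node end to end: launch the kvm2 plugin, read the VM state, SSH in to check the kubelet service, then probe the apiserver's /healthz endpoint and expect HTTP 200. A minimal Go sketch of that last probe (the URL is the one logged above; skipping TLS verification is an illustrative shortcut, not what minikube actually does):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver health endpoint seen in the trace above.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.5:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned:", resp.StatusCode) // 200 means the apiserver answered
	}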

                                                
                                    
TestMultiNode/serial/StartAfterStop (42.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-330484 node start m03 -v=7 --alsologtostderr: (41.777809466s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.48s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (348.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-330484
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-330484
E0122 20:49:04.380188  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-330484: (3m3.592885306s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-330484 --wait=true -v=8 --alsologtostderr
E0122 20:50:50.257806  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:52:07.447000  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-330484 --wait=true -v=8 --alsologtostderr: (2m45.069613165s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-330484
--- PASS: TestMultiNode/serial/RestartKeepsNodes (348.78s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-330484 node delete m03: (2.351110738s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.94s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 stop
E0122 20:54:04.378435  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-330484 stop: (3m1.776207827s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-330484 status: exit status 7 (99.729075ms)

                                                
                                                
-- stdout --
	multinode-330484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-330484-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr: exit status 7 (95.27957ms)

                                                
                                                
-- stdout --
	multinode-330484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-330484-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 20:55:36.794432  284326 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:55:36.794559  284326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:55:36.794568  284326 out.go:358] Setting ErrFile to fd 2...
	I0122 20:55:36.794573  284326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:55:36.794831  284326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 20:55:36.795056  284326 out.go:352] Setting JSON to false
	I0122 20:55:36.795093  284326 mustload.go:65] Loading cluster: multinode-330484
	I0122 20:55:36.795139  284326 notify.go:220] Checking for updates...
	I0122 20:55:36.795548  284326 config.go:182] Loaded profile config "multinode-330484": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 20:55:36.795574  284326 status.go:174] checking status of multinode-330484 ...
	I0122 20:55:36.796049  284326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:55:36.796119  284326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:55:36.812363  284326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0122 20:55:36.812836  284326 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:55:36.813564  284326 main.go:141] libmachine: Using API Version  1
	I0122 20:55:36.813587  284326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:55:36.814020  284326 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:55:36.814319  284326 main.go:141] libmachine: (multinode-330484) Calling .GetState
	I0122 20:55:36.816018  284326 status.go:371] multinode-330484 host status = "Stopped" (err=<nil>)
	I0122 20:55:36.816042  284326 status.go:384] host is not running, skipping remaining checks
	I0122 20:55:36.816051  284326 status.go:176] multinode-330484 status: &{Name:multinode-330484 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:55:36.816098  284326 status.go:174] checking status of multinode-330484-m02 ...
	I0122 20:55:36.816577  284326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0122 20:55:36.816640  284326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:55:36.832588  284326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37449
	I0122 20:55:36.833151  284326 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:55:36.833698  284326 main.go:141] libmachine: Using API Version  1
	I0122 20:55:36.833722  284326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:55:36.834071  284326 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:55:36.834302  284326 main.go:141] libmachine: (multinode-330484-m02) Calling .GetState
	I0122 20:55:36.836147  284326 status.go:371] multinode-330484-m02 host status = "Stopped" (err=<nil>)
	I0122 20:55:36.836163  284326 status.go:384] host is not running, skipping remaining checks
	I0122 20:55:36.836169  284326 status.go:176] multinode-330484-m02 status: &{Name:multinode-330484-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.97s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (180.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-330484 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0122 20:55:50.257356  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-330484 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.044696212s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-330484 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.63s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-330484
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-330484-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-330484-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.009904ms)

                                                
                                                
-- stdout --
	* [multinode-330484-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-330484-m02' is duplicated with machine name 'multinode-330484-m02' in profile 'multinode-330484'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-330484-m03 --driver=kvm2  --container-runtime=crio
E0122 20:59:04.376776  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-330484-m03 --driver=kvm2  --container-runtime=crio: (45.194615154s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-330484
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-330484: exit status 80 (237.955715ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-330484 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-330484-m03 already exists in multinode-330484-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-330484-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-330484-m03: (1.083859763s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.65s)

                                                
                                    
TestScheduledStopUnix (117.09s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-257900 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-257900 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.229651593s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-257900 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-257900 -n scheduled-stop-257900
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-257900 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0122 21:05:04.470392  254754 retry.go:31] will retry after 73.191µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.471602  254754 retry.go:31] will retry after 105.834µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.472779  254754 retry.go:31] will retry after 144.46µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.473934  254754 retry.go:31] will retry after 497.885µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.475110  254754 retry.go:31] will retry after 694.659µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.476244  254754 retry.go:31] will retry after 941.35µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.477420  254754 retry.go:31] will retry after 986.127µs: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.478559  254754 retry.go:31] will retry after 1.836391ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.480819  254754 retry.go:31] will retry after 3.392797ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.485106  254754 retry.go:31] will retry after 4.203751ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.490433  254754 retry.go:31] will retry after 4.529277ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.495735  254754 retry.go:31] will retry after 6.086505ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.501987  254754 retry.go:31] will retry after 16.83647ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.519335  254754 retry.go:31] will retry after 28.835667ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
I0122 21:05:04.548638  254754 retry.go:31] will retry after 21.02136ms: open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/scheduled-stop-257900/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-257900 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-257900 -n scheduled-stop-257900
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-257900
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-257900 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0122 21:05:50.261660  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-257900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-257900: exit status 7 (78.473692ms)

                                                
                                                
-- stdout --
	scheduled-stop-257900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-257900 -n scheduled-stop-257900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-257900 -n scheduled-stop-257900: exit status 7 (76.569307ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-257900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-257900
--- PASS: TestScheduledStopUnix (117.09s)
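
For reference: the retry.go lines above show the test helper polling for the scheduled-stop pid file, with delays that grow (with jitter) from microseconds up into the tens of milliseconds. A small Go sketch of that poll-with-growing-delay pattern (the path, doubling factor, and cap are illustrative assumptions, not minikube's actual retry constants):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		path := "/tmp/example-profile/pid" // hypothetical stand-in for the profile pid file
		delay := 100 * time.Microsecond
		for attempt := 1; attempt <= 15; attempt++ {
			if _, err := os.Stat(path); err == nil {
				fmt.Println("pid file appeared on attempt", attempt)
				return
			}
			fmt.Printf("attempt %d: not found, retrying after %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2
			if delay > 30*time.Millisecond {
				delay = 30 * time.Millisecond
			}
		}
		fmt.Println("gave up waiting for the pid file")
	}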

                                                
                                    
TestRunningBinaryUpgrade (157.83s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1201955410 start -p running-upgrade-484181 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1201955410 start -p running-upgrade-484181 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m6.014935422s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-484181 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-484181 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.802105253s)
helpers_test.go:175: Cleaning up "running-upgrade-484181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-484181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-484181: (1.347227299s)
--- PASS: TestRunningBinaryUpgrade (157.83s)
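This test upgrades a cluster in place: a previously released minikube binary (downloaded to /tmp) brings the cluster up, then the freshly built binary re-runs start against the still-running cluster. The two steps, taken from the commands in the log:

	/tmp/minikube-v1.26.0.1201955410 start -p running-upgrade-484181 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-484181 --memory=2200 --driver=kvm2 --container-runtime=crio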

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (102.875392ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-347686] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
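The MK_USAGE exit above is the assertion under test: --kubernetes-version and --no-kubernetes are mutually exclusive. A sketch of the two valid alternatives, reusing the profile name from the test:

	# run the VM without Kubernetes at all (no version flag)
	out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --driver=kvm2 --container-runtime=crio
	# or, if a global kubernetes-version was set earlier, clear it first as the error message suggests
	minikube config unset kubernetes-version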

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (126.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347686 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347686 --driver=kvm2  --container-runtime=crio: (2m5.818761405s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-347686 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (126.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-804887 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-804887 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (152.035674ms)

                                                
                                                
-- stdout --
	* [false-804887] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 21:07:59.131663  290617 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:07:59.131791  290617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:07:59.131802  290617 out.go:358] Setting ErrFile to fd 2...
	I0122 21:07:59.131808  290617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:07:59.132161  290617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-247142/.minikube/bin
	I0122 21:07:59.133031  290617 out.go:352] Setting JSON to false
	I0122 21:07:59.134111  290617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":13825,"bootTime":1737566254,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:07:59.134224  290617 start.go:139] virtualization: kvm guest
	I0122 21:07:59.136951  290617 out.go:177] * [false-804887] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:07:59.138519  290617 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:07:59.138534  290617 notify.go:220] Checking for updates...
	I0122 21:07:59.143289  290617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:07:59.145430  290617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-247142/kubeconfig
	I0122 21:07:59.147030  290617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-247142/.minikube
	I0122 21:07:59.148859  290617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:07:59.150394  290617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:07:59.152481  290617 config.go:182] Loaded profile config "NoKubernetes-347686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:07:59.152631  290617 config.go:182] Loaded profile config "cert-expiration-673511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:07:59.152760  290617 config.go:182] Loaded profile config "force-systemd-flag-765715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0122 21:07:59.152962  290617 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:07:59.206508  290617 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:07:59.207938  290617 start.go:297] selected driver: kvm2
	I0122 21:07:59.207971  290617 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:07:59.208003  290617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:07:59.210432  290617 out.go:201] 
	W0122 21:07:59.211827  290617 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0122 21:07:59.213357  290617 out.go:201] 

                                                
                                                
** /stderr **
net_test.go then dumps debug logs for the never-created profile; every probe below fails with "context was not found" or "Profile not found", which is the expected state after the rejected start.
net_test.go:88: 
----------------------- debugLogs start: false-804887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-804887" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 21:07:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.149:8443
  name: cert-expiration-673511
contexts:
- context:
    cluster: cert-expiration-673511
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 21:07:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-673511
  name: cert-expiration-673511
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-673511
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/cert-expiration-673511/client.crt
    client-key: /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/cert-expiration-673511/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-804887

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804887"

                                                
                                                
----------------------- debugLogs end: false-804887 [took: 3.897428608s] --------------------------------
helpers_test.go:175: Cleaning up "false-804887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-804887
--- PASS: TestNetworkPlugins/group/false (4.25s)
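The MK_USAGE failure at the top of this entry is what the test asserts: the crio container runtime requires a CNI, so --cni=false is rejected with exit status 14 before any VM is created. A sketch of invocations that do pass validation with the same driver and runtime (the bridge value is illustrative; any supported CNI works):

	# let minikube pick a CNI automatically (the "auto" group below does exactly this)
	out/minikube-linux-amd64 start -p auto-804887 --driver=kvm2 --container-runtime=crio
	# or name one explicitly
	out/minikube-linux-amd64 start -p bridge-804887 --cni=bridge --driver=kvm2 --container-runtime=crio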

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (40.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.170531112s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-347686 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-347686 status -o json: exit status 2 (278.54308ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-347686","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-347686
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.38s)
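The status JSON above is the check this test relies on: re-starting an existing profile with --no-kubernetes leaves the host running while kubelet and the apiserver are stopped, and minikube status exits 2 in that state. A sketch of reading just those fields, assuming jq is available (jq is not part of the test itself):

	out/minikube-linux-amd64 -p NoKubernetes-347686 status -o json | jq '{Host, Kubelet, APIServer}'
	# → {"Host":"Running","Kubelet":"Stopped","APIServer":"Stopped"}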

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0122 21:09:04.377244  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347686 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.553656833s)
--- PASS: TestNoKubernetes/serial/Start (50.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-347686 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-347686 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.735181ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
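The non-zero exit here is the pass condition: systemctl is-active exits 3 for an inactive unit (the "Process exited with status 3" in stderr), so the minikube ssh wrapper returning exit status 1 confirms kubelet is not running. The same check run by hand, without --quiet so the state is printed:

	out/minikube-linux-amd64 ssh -p NoKubernetes-347686 "sudo systemctl is-active kubelet"
	# prints "inactive" and exits 3 while Kubernetes is disabled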

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.309770117s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.881102031s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-347686
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-347686: (1.745966542s)
--- PASS: TestNoKubernetes/serial/Stop (1.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (29.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347686 --driver=kvm2  --container-runtime=crio
E0122 21:10:50.257979  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347686 --driver=kvm2  --container-runtime=crio: (29.891051682s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (29.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.222668757 start -p stopped-upgrade-928062 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.222668757 start -p stopped-upgrade-928062 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (55.017435228s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.222668757 -p stopped-upgrade-928062 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.222668757 -p stopped-upgrade-928062 stop: (12.211864044s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-928062 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-928062 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.350358975s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.58s)
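Unlike TestRunningBinaryUpgrade above, this variant stops the cluster with the old release before the new binary starts it again. The three steps, taken from the commands in the log:

	/tmp/minikube-v1.26.0.222668757 start -p stopped-upgrade-928062 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.222668757 -p stopped-upgrade-928062 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-928062 --memory=2200 --driver=kvm2 --container-runtime=crio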

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-347686 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-347686 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.007714ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestPause/serial/Start (112.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-556362 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-556362 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m52.504670617s)
--- PASS: TestPause/serial/Start (112.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (75.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.458747791s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-804887 "pgrep -a kubelet"
I0122 21:12:46.824833  254754 config.go:182] Loaded profile config "auto-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8mm95" [c236b448-0116-4ff8-b034-c5c1239bcdae] Pending
helpers_test.go:344: "netcat-5d86dc444-8mm95" [c236b448-0116-4ff8-b034-c5c1239bcdae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8mm95" [c236b448-0116-4ff8-b034-c5c1239bcdae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004305541s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (42.2s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-556362 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-556362 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.158695657s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-928062
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-928062: (1.024436658s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (97.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m37.130455282s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (97.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-804887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
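The three short tests above are the connectivity probes every network-plugin group repeats against its netcat deployment: in-cluster service DNS, reachability of the pod's own listening port via localhost, and hairpin traffic from the pod back through its own service. The manual equivalents, taken from the commands in the log:

	kubectl --context auto-804887 exec deployment/netcat -- nslookup kubernetes.default                          # DNS
	kubectl --context auto-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"          # Localhost
	kubectl --context auto-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"             # HairPin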

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (90.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m30.47171767s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.47s)

                                                
                                    
x
+
TestPause/serial/Pause (1.02s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-556362 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-556362 --alsologtostderr -v=5: (1.020762308s)
--- PASS: TestPause/serial/Pause (1.02s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-556362 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-556362 --output=json --layout=cluster: exit status 2 (301.771796ms)

                                                
                                                
-- stdout --
	{"Name":"pause-556362","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-556362","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
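The exit status 2 and the 418 ("Paused") status codes above are what the test expects after minikube pause. A sketch of pulling the per-component state out of the cluster-layout JSON, assuming jq is available (jq is not used by the test):

	out/minikube-linux-amd64 status -p pause-556362 --output=json --layout=cluster | jq '.Nodes[].Components | map_values(.StatusName)'
	# → {"apiserver":"Paused","kubelet":"Stopped"}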

                                                
                                    
x
+
TestPause/serial/Unpause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-556362 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.94s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.2s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-556362 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-556362 --alsologtostderr -v=5: (1.202543531s)
--- PASS: TestPause/serial/PauseAgain (1.20s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.19s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-556362 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-556362 --alsologtostderr -v=5: (1.193857639s)
--- PASS: TestPause/serial/DeletePaused (1.19s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (95.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0122 21:14:04.376540  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m35.296192669s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8qc66" [3577c143-eefa-4a7f-8c7c-be396a1a6702] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005356814s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-804887 "pgrep -a kubelet"
I0122 21:14:40.790042  254754 config.go:182] Loaded profile config "kindnet-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5p5qh" [9a9df945-c6a4-46d4-a5b2-c666cfb68be5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5p5qh" [9a9df945-c6a4-46d4-a5b2-c666cfb68be5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005766596s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k7n6m" [9b7673e6-3d48-462e-b7e9-e4c204450843] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004396788s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-804887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-804887 "pgrep -a kubelet"
I0122 21:14:54.091706  254754 config.go:182] Loaded profile config "calico-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8pr8x" [51f19b68-c95a-45b0-b0a2-fafba62ce915] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8pr8x" [51f19b68-c95a-45b0-b0a2-fafba62ce915] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.007880146s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-804887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-804887 "pgrep -a kubelet"
I0122 21:15:10.841999  254754 config.go:182] Loaded profile config "custom-flannel-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-httzr" [dff49bb7-57c3-4b21-987c-cd7a59a4968a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-httzr" [dff49bb7-57c3-4b21-987c-cd7a59a4968a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005216381s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.571946278s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-804887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (99.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m39.90683567s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (104.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0122 21:15:50.257291  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m44.750085178s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.75s)
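
Note: the enable-default-cni, flannel and bridge Start runs differ only in how the CNI is selected; side by side (copied from this run, whitespace condensed):

  out/minikube-linux-amd64 start -p enable-default-cni-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p flannel-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p bridge-804887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 --container-runtime=crio

(--enable-default-cni is the older spelling for minikube's built-in bridge CNI; current minikube prefers --cni=bridge.)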

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-804887 "pgrep -a kubelet"
I0122 21:16:50.771477  254754 config.go:182] Loaded profile config "enable-default-cni-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-x9scm" [f158bb5a-6a1f-447f-afe2-f4948b9ce43c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-x9scm" [f158bb5a-6a1f-447f-afe2-f4948b9ce43c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005895606s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-804887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pfcl7" [22da158b-e699-4f4f-b5bd-1f87eaddcd0a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005778908s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
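
Note: the ControllerPod step waits for the flannel DaemonSet pod through the test's own poller; a rough kubectl equivalent (label and namespace taken from the output above, not what the harness actually runs) is:

  kubectl --context flannel-804887 get pods -n kube-flannel -l app=flannel
  kubectl --context flannel-804887 wait --for=condition=Ready pod -l app=flannel -n kube-flannel --timeout=10m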

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-804887 "pgrep -a kubelet"
I0122 21:17:11.155937  254754 config.go:182] Loaded profile config "flannel-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-722gw" [f01c5f39-8ddb-4272-8ff9-5ea6807a91a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0122 21:17:13.342527  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-722gw" [f01c5f39-8ddb-4272-8ff9-5ea6807a91a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.01545489s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (78.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-806477 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-806477 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m18.411870671s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.41s)
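
Note: no-preload exercises a start without the cached preload; with --preload=false minikube pulls the Kubernetes images individually instead of unpacking the preloaded tarball (a reading of the flag, not something this log states). The invocation from this run:

  out/minikube-linux-amd64 start -p no-preload-806477 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1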

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-804887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-804887 "pgrep -a kubelet"
I0122 21:17:25.840267  254754 config.go:182] Loaded profile config "bridge-804887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-804887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vlzrr" [185f9be6-a45a-40cb-a49d-b7babeda5ef1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vlzrr" [185f9be6-a45a-40cb-a49d-b7babeda5ef1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005433213s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-804887 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-804887 exec deployment/netcat -- nslookup kubernetes.default: (10.169614191s)
E0122 21:17:47.281056  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/DNS (10.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (98.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-635179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:17:47.117987  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:47.124456  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:47.135961  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:47.157453  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:47.199752  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-635179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m38.123546403s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (98.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-804887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0122 21:17:47.443364  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0122 21:29:04.376759  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:29:34.482818  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:29:47.845969  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:30:11.097684  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:30:50.257759  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/addons-772234/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-991469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:18:07.615194  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:28.097102  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-991469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m49.09245037s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-806477 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a6ad633-1855-485c-a567-7fc889ffa096] Pending
helpers_test.go:344: "busybox" [1a6ad633-1855-485c-a567-7fc889ffa096] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005697929s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-806477 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.35s)
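
Note: DeployApp is a three-step smoke test: create the busybox pod from testdata, wait for it to run (the test allows 8m0s), then confirm exec works by reading the open-file limit. Reproduced by hand it looks roughly like this; the wait step below uses kubectl wait, whereas the harness uses its own poller:

  kubectl --context no-preload-806477 create -f testdata/busybox.yaml
  kubectl --context no-preload-806477 wait --for=condition=Ready pod busybox --timeout=8m
  kubectl --context no-preload-806477 exec busybox -- /bin/sh -c "ulimit -n"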

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-806477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-806477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.251737476s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-806477 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)
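
Note: EnableAddonWhileActive is the addon toggle plus a sanity describe, verbatim from this run:

  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-806477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context no-preload-806477 describe deploy/metrics-server -n kube-system

The fake.domain/echoserver override is deliberate: the check appears aimed at the addon's image/registry override plumbing (the deployment carries the substituted image) rather than at a working metrics-server.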

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-806477 --alsologtostderr -v=3
E0122 21:19:04.376705  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/functional-136272/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:09.059293  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-806477 --alsologtostderr -v=3: (1m31.083733221s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-635179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [24eba9f5-dc71-48c6-b409-7cb2a33a13bc] Pending
helpers_test.go:344: "busybox" [24eba9f5-dc71-48c6-b409-7cb2a33a13bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [24eba9f5-dc71-48c6-b409-7cb2a33a13bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00488948s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-635179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-635179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-635179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-635179 --alsologtostderr -v=3
E0122 21:19:34.482742  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:34.489228  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:34.500710  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:34.522225  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:34.563794  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:34.645383  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:34.807005  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:35.128364  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:35.770317  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:37.052165  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:39.614345  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:44.735831  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:47.845829  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:47.853139  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:47.864646  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:47.886245  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:47.927666  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:48.009196  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:48.170881  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:48.492821  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:49.135188  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:50.416776  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:52.978587  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-635179 --alsologtostderr -v=3: (1m31.194403144s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-991469 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b9b711f1-50ba-49a6-bbf2-e835bb255ce0] Pending
E0122 21:19:54.978160  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [b9b711f1-50ba-49a6-bbf2-e835bb255ce0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b9b711f1-50ba-49a6-bbf2-e835bb255ce0] Running
E0122 21:19:58.099987  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.007195513s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-991469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-991469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-991469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058383582s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-991469 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-991469 --alsologtostderr -v=3
E0122 21:20:08.341576  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.097931  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.104430  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.115952  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.137392  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.178907  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.260500  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.422150  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:11.743690  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:12.385669  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:13.667489  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:15.459671  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/kindnet-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.229594  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:21.351632  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-991469 --alsologtostderr -v=3: (1m31.074289949s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806477 -n no-preload-806477
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806477 -n no-preload-806477: exit status 7 (74.38068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-806477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
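
Note: EnableAddonAfterStop relies on `minikube status` exiting non-zero for a stopped host (exit status 7 above, which the test accepts) and on addons remaining togglable while the cluster is down. The two commands from this run:

  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-806477 -n no-preload-806477    # prints "Stopped", exit status 7
  out/minikube-linux-amd64 addons enable dashboard -p no-preload-806477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4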

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635179 -n embed-certs-635179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635179 -n embed-certs-635179: exit status 7 (80.326141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-635179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (304.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-635179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:21:09.784461  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/calico-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-635179 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m4.138205576s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635179 -n embed-certs-635179
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (304.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991469 -n default-k8s-diff-port-991469
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991469 -n default-k8s-diff-port-991469: exit status 7 (84.866816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-991469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-181389 --alsologtostderr -v=3
E0122 21:22:54.958308  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/custom-flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-181389 --alsologtostderr -v=3: (1.530538767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181389 -n old-k8s-version-181389: exit status 7 (78.255377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-181389 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7dbz2" [27aa0c02-6865-45b2-82fd-1aa1dd5803e8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005616103s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7dbz2" [27aa0c02-6865-45b2-82fd-1aa1dd5803e8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005431744s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-635179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-635179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-635179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-635179 --alsologtostderr -v=1: (1.034771022s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635179 -n embed-certs-635179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635179 -n embed-certs-635179: exit status 2 (320.518166ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-635179 -n embed-certs-635179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-635179 -n embed-certs-635179: exit status 2 (281.945256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-635179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635179 -n embed-certs-635179
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-635179 -n embed-certs-635179
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.25s)
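
Note: the Pause sequence above, laid out as the commands it runs (copied from this run): pause, confirm via status where exit status 2 is expected while components are paused, unpause, then status again.

  out/minikube-linux-amd64 pause -p embed-certs-635179 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635179 -n embed-certs-635179   # "Paused", exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-635179 -n embed-certs-635179     # "Stopped", exit status 2
  out/minikube-linux-amd64 unpause -p embed-certs-635179 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635179 -n embed-certs-635179
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-635179 -n embed-certs-635179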

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-489789 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:26:51.117734  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:27:04.885312  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-489789 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (52.782145462s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.78s)
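
Note: newest-cni starts with a reduced readiness wait and a custom pod CIDR pushed through kubeadm. The invocation from this run, plus an optional check that the range landed on the node (the check is not part of the test; the node's podCIDR should be a subnet of 10.42.0.0/16):

  out/minikube-linux-amd64 start -p newest-cni-489789 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1
  kubectl --context newest-cni-489789 get nodes -o jsonpath='{.items[*].spec.podCIDR}'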

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-489789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-489789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.938711132s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.94s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-489789 --alsologtostderr -v=3
E0122 21:27:18.821389  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/enable-default-cni-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-489789 --alsologtostderr -v=3: (7.403615121s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-489789 -n newest-cni-489789
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-489789 -n newest-cni-489789: exit status 7 (77.989153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-489789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-489789 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0122 21:27:26.086364  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:27:32.589198  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/flannel-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:27:47.117836  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/auto-804887/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:27:53.790557  254754 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/bridge-804887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-489789 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (39.132597081s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-489789 -n newest-cni-489789
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.42s)
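
Note: the second start reuses the CNI-oriented flags from the first start. Reproduced below as a standalone command, with the flag values copied from the log above; the comments are editorial glosses, not minikube documentation.

    # --wait gates start on the apiserver, system pods, and the default service account;
    # --network-plugin=cni plus the kubeadm pod CIDR override set up the CNI-only profile;
    # --driver=kvm2 and --container-runtime=crio match this job's configuration.
    out/minikube-linux-amd64 start -p newest-cni-489789 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1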

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-489789 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-489789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-489789 --alsologtostderr -v=1: (1.019325298s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-489789 -n newest-cni-489789
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-489789 -n newest-cni-489789: exit status 2 (272.619313ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-489789 -n newest-cni-489789
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-489789 -n newest-cni-489789: exit status 2 (272.327951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-489789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-489789 -n newest-cni-489789
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-489789 -n newest-cni-489789
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.94s)
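
Note: the pause subtest pauses the profile, confirms that the API server and kubelet report a non-running state, then unpauses and re-checks. A condensed shell version of that cycle follows; exit status 2 from `status` while paused is expected, as the log notes with "may be ok".

    out/minikube-linux-amd64 pause -p newest-cni-489789 --alsologtostderr -v=1

    # While paused, these checks exit 2 and print Paused / Stopped.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-489789 -n newest-cni-489789 || true
    out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p newest-cni-489789 -n newest-cni-489789 || true

    out/minikube-linux-amd64 unpause -p newest-cni-489789 --alsologtostderr -v=1

    # After unpause both checks should exit 0 again.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-489789 -n newest-cni-489789
    out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p newest-cni-489789 -n newest-cni-489789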

                                                
                                    

Test skip (39/318)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.39
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.38
265 TestNetworkPlugins/group/cilium 5.14
280 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.39s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-772234 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.39s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
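
Note: all eight tunnel subtests above skip because the test user cannot run `route` without a password prompt. If these need to run on a similar host, a passwordless sudo rule is the usual workaround; the sketch below is an assumption about this environment (the `jenkins` user and the `/sbin/route` and `/sbin/ip` paths are guesses, not values taken from this report).

    # Hypothetical sudoers drop-in; adjust the user and binary paths for your host
    # and validate with visudo before relying on it.
    echo 'jenkins ALL=(root) NOPASSWD: /sbin/route, /sbin/ip' | sudo tee /etc/sudoers.d/minikube-tunnel
    sudo visudo -cf /etc/sudoers.d/minikube-tunnel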

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-804887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-804887" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 21:07:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.149:8443
  name: cert-expiration-673511
contexts:
- context:
    cluster: cert-expiration-673511
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 21:07:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-673511
  name: cert-expiration-673511
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-673511
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/cert-expiration-673511/client.crt
    client-key: /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/cert-expiration-673511/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-804887

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804887"

                                                
                                                
----------------------- debugLogs end: kubenet-804887 [took: 3.205187803s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-804887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-804887
--- SKIP: TestNetworkPlugins/group/kubenet (3.38s)
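
Note: the kubenet variant is skipped outright because cri-o requires a real CNI plugin. A comparable profile can only be brought up by selecting a CNI explicitly; the example below is illustrative (the choice of `bridge` and the reuse of the profile name are assumptions, not what the suite does).

    # Start a cri-o profile with an explicit CNI instead of kubenet.
    out/minikube-linux-amd64 start -p kubenet-804887 --driver=kvm2 \
      --container-runtime=crio --cni=bridge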

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-804887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-804887" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-247142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 21:07:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.149:8443
  name: cert-expiration-673511
contexts:
- context:
    cluster: cert-expiration-673511
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 21:07:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-673511
  name: cert-expiration-673511
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-673511
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/cert-expiration-673511/client.crt
    client-key: /home/jenkins/minikube-integration/20288-247142/.minikube/profiles/cert-expiration-673511/client.key
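Note: the kubeconfig dumped above contains only the cert-expiration-673511 entry and its current-context is "", which is why every kubectl call against the cilium-804887 context in this debug block fails. A quick manual check of the same state (illustrative kubectl invocations, not part of the test harness):
	kubectl config get-contexts              # list every context this kubeconfig knows about
	kubectl config current-context           # errors here because current-context is unset
	kubectl --context cilium-804887 get ns   # fails: context "cilium-804887" does not exist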

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-804887

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-804887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804887"

                                                
                                                
----------------------- debugLogs end: cilium-804887 [took: 4.975533133s] --------------------------------
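Every command in the debugLogs block above failed the same way because the cilium-804887 profile was never started: the cilium variant of TestNetworkPlugins was skipped, so neither a minikube profile nor a kubectl context exists for it. A minimal sketch of guarding ad-hoc debug collection on profile existence (profile name and the two sample commands are illustrative, not the harness's own collector):
	PROFILE=cilium-804887
	if minikube profile list 2>/dev/null | grep -q "$PROFILE"; then
	  minikube -p "$PROFILE" ssh "sudo iptables -t nat -L"        # host-side dump
	  kubectl --context "$PROFILE" describe ds -n kube-system     # daemon set state
	else
	  echo "profile $PROFILE not found; skipping debug collection"
	fi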
helpers_test.go:175: Cleaning up "cilium-804887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-804887
--- SKIP: TestNetworkPlugins/group/cilium (5.14s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-250822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-250822
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    